
Rob, Luisa, and the 80000 Hours team

👤 Speaker
657 total appearances

Podcast Appearances

80,000 Hours Podcast
Why 'Aligned AI' Could Still Kill Democracy | David Duvenaud, ex-Anthropic team lead

Or one of the likely failure modes is that we tend to think in extremes: it's very hard to imagine anything other than either maximal hard-scrabble competition, where all of the surplus is burned away, or a perfect hegemon, in which everything is divided and nothing is wasted.

Do you think there are middle grounds that are stable equilibria long-term?

Or maybe people are correct in thinking that the middle ground just doesn't last: there's a gravity well pulling towards intense maximal competition or towards maximal coordination, because those states tend to persist.

So I think you helped organize a conference a couple of weeks back.

The title was something like, "Are there good post-AGI social equilibria?"

Yeah.

It may not be a good idea, but at least there's an idea.

Yeah.

Is there a way of summing up? Maybe by this point in the conversation people have some sense, but why is it hard to come up with a good post-AGI equilibrium?

I guess in my mind, there are many different failure modes, many different bad directions, that you have to avoid.

And avoiding all of them simultaneously is really quite a difficult challenge to meet.

I guess in my mind, the things we're trying to navigate between are: a situation in which humans end up having no control quite early; a situation in which humans dominate, and treat the machines and AIs of the future poorly.

Some people will think it's very bad.

Some people might not think it's such a problem, or they don't think that people would actually do it.

But that's a possibility.

Then I guess there's lock-in: locking in our current, somewhat idiosyncratic values and ideas, such that we can't intellectually advance, reflect, and realize that some of our ideas are mistaken even by our own lights.