Eliezer Yudkowsky

👤 Speaker
1713 total appearances

[Chart: Appearances Over Time]

Podcast Appearances

Lex Fridman Podcast
#368 – Eliezer Yudkowsky: Dangers of AI and the End of Human Civilization

There's a difference between the notion that the actress is somehow manipulative.

For example, GPT-3, I'm guessing, to whatever extent there's an alien actress in there versus something that mistakenly believes it's a human, as it were, while maybe not even being a person.

Yeah.

So the question of prediction via alien actress cogitating versus prediction via being isomorphic to the thing predicted is a spectrum.

And to whatever extent this is an alien actress, I'm not sure that there's a whole person alien actress with different goals from predicting the next step, being manipulative or anything like that.

That might be GPT-5.

Or GPT-6, even.

It's one of a bunch of things that change at different points.

I'm trying to get out ahead of the curve here, but if you imagine what the textbook from the future would say, if we'd actually been able to study this for 50 years without killing ourselves and without transcending, and you just imagine a wormhole opens and a textbook from that impossible world falls out, the textbook is not going to say, there is a single sharp threshold where everything changes.

It's going to be like, of course we know that best practices for aligning these systems must take into account the following...

like seven major thresholds of importance, which are passed at the following seven different points, is what the textbook is going to say.

The textbook isn't going to talk about big leaps, because big leaps are the way you think when you have a very simple scientific model of what's going on, where it's just like, all this stuff is there, or all this stuff is not there.

Or there's a single quantity and it's increasing linearly.

The textbook would say, well, and then GPT-3 had capability W, X, Y, and GPT-4 had capability Z1, Z2, and Z3.

Not in terms of what it can externally do, but in terms of internal machinery that started to be present.

It's just because we have no idea what the internal machinery is that we are not already seeing chunks of machinery appearing piece by piece, as they no doubt have been.

We just don't know what they are.

Sure, but humans having great leaps in their map, their understanding of the system, is a very different concept from the system itself acquiring new chunks of machinery.

Oh, it's been vastly exceeding that. Yeah, the rate at which it's gaining capabilities is vastly outracing our ability to understand what's going on in there.

There is a whole team of developers there that also gets credit.