Eliezer Yudkowsky

Speaker
1716 total appearances

Podcast Appearances

Lex Fridman Podcast
#368 – Eliezer Yudkowsky: Dangers of AI and the End of Human Civilization

Yeah, the probabilistic stuff is a giant wasteland of, you know, Eliezer and Paul Christiano arguing with each other, and EA going like... And that's with, like, two actually trustworthy systems that are not trying to deceive you.

You're talking about the two humans?

Myself and Paul Christiano, yeah.

Yeah.

Yeah, if it's hard to tell who's right, then it's hard to train an AI system to be right.

No, I'm saying it's difficult and dangerous in proportion to how alien it is and how much smarter than you it is.

I would not say growing exponentially, first because the word exponential is a thing that has a particular mathematical meaning.

And there's all kinds of ways for things to go up that are not exactly on an exponential curve.

And I don't know that it's going to be exponential, so I'm not going to say exponential.
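
A minimal illustration of the distinction, not from the transcript: exponential growth has the specific form

\[ f(t) = a\,e^{kt}, \qquad k > 0, \]

which doubles on a fixed schedule, once every $\ln(2)/k$ units of time. Other increasing curves lack that property: polynomial growth $f(t) = a\,t^n$ has doubling times that stretch out as $t$ grows, and logistic growth $f(t) = L/(1 + e^{-k(t - t_0)})$ rises steeply and then flattens. All of them "go up"; only the first is exponential.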

But even leaving that aside, this is not about how fast it's moving, it's about where it is.

How alien is it?

How much smarter than you is it?

Well, how smart is it?

Suppose that some alien civilization with goals ultimately unsympathetic to ours, possibly not even conscious as we would see it, managed to capture the entire Earth in a little jar connected to their version of the internet, but Earth is, like, running much faster than the aliens.

So we get to think for 100 years for every one of their hours, but we're trapped in a little box and we're connected to their internet.
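
For scale, the arithmetic behind that ratio, not spelled out in the transcript:

\[ 100 \text{ years} \times 8{,}766 \text{ hours/year} \approx 8.8 \times 10^5 \text{ hours}, \]

so the boxed Earth would be thinking close to a million times faster than its captors.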

It's actually still not an adequate analogy because, you know, something can be smarter than Earth getting a hundred years to think.

But nonetheless, if you were very, very smart and you were stuck in a little box connected to the internet, and you were in a larger civilization to which you are ultimately unsympathetic, you know, maybe you would choose to be nice, because you are human, and humans in general, and you in particular, may choose to be nice.