
Eliezer Yudkowsky

Speaker
1713 total appearances


Podcast Appearances

Lex Fridman Podcast
#368 – Eliezer Yudkowsky: Dangers of AI and the End of Human Civilization

And what they would be seeing was that I had correctly understood them, but not that I was similar to them.

Ask Robin Hanson.

He has this lovely grabby-aliens paper, which is more or less the only argument I've ever seen for where they are and how many of them there are, based on a very clever argument: if you have a bunch of locks of different difficulty and you are randomly trying keys on them, the solutions will be about evenly spaced in time, even if the locks are of different difficulties.
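That even-spacing claim can be checked with a small simulation. The sketch below is not from the podcast; the lock rates, deadline, and function names are made-up illustration. Each lock is modeled as taking an exponentially distributed time to open, and we keep only the rare runs in which every lock opens before a deadline that is short compared to the locks' typical solve times:

```python
import random

def trial(rates, deadline, rng):
    """One run: try to open each lock in sequence.  A lock with success
    rate r takes an Exp(r)-distributed time to open.  Return the per-lock
    solve times if every lock opens before the deadline, else None."""
    elapsed = 0.0
    times = []
    for r in rates:
        wait = rng.expovariate(r)
        elapsed += wait
        if elapsed > deadline:
            return None  # ran out of time; this run doesn't count
        times.append(wait)
    return times

def conditional_mean_times(rates, deadline, n_successes, seed=0):
    """Rejection-sample runs in which all locks open before the deadline,
    and average how long each lock took in those successful runs."""
    rng = random.Random(seed)
    totals = [0.0] * len(rates)
    got = 0
    while got < n_successes:
        times = trial(rates, deadline, rng)
        if times is not None:
            totals = [t + w for t, w in zip(totals, times)]
            got += 1
    return [t / got for t in totals]

# Lock difficulties spanning a factor of 15, all hard relative to the
# deadline.  Conditioned on success, each lock's average solve time comes
# out roughly equal (about deadline / (n_locks + 1)), despite the
# difficulty differences.
print(conditional_mean_times([0.3, 0.1, 0.02], deadline=2.0, n_successes=200))
```

Despite the 15x spread in difficulty, the conditioned average solve times cluster around the same value, which is the heart of the hard-steps argument: in the runs where everything succeeds in time, the steps look about evenly spaced regardless of how hard each one was.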

In the rare cases where a solution to all the locks exists in time, Robin Hanson looks at the arguably hard steps in human civilization's coming into existence, and how much longer it has left to come into existence before, for example, all the water slips back under the crust into the mantle, and so on, and infers that the aliens are about half a billion to a billion light-years away.

And it's quite a clever calculation.

It may be entirely wrong, but it's the only time I've ever seen anybody even come up with a halfway good argument for how many of them, where are they?

If it ends up anywhere, it ends up at AGI.

Maybe there are aliens who are just like the dolphins.

And it's just like too hard for them to forge metal.

And, you know, maybe if you have aliens with no technology like that, they keep on getting smarter and smarter.

And eventually the super-dolphins figure out something very clever to do given their situation.

And they still end up with high technology.

And in that case, they can probably solve their AGI alignment problem.

If they're much smarter before they actually confront it because they had to solve a much harder environmental problem to build computers, their chances are probably much better than ours.

I do worry that, of the aliens who are like humans, like a modern human civilization, the super-vast majority of them are dead, given how far we seem to be from solving this problem.