
Eliezer Yudkowsky

Speaker
1716 total appearances

Podcast Appearances

Lex Fridman Podcast
#368 – Eliezer Yudkowsky: Dangers of AI and the End of Human Civilization

Aren't you supposing something that's smart enough to be dangerous, but also stupid enough that it will just make paperclips and never question that?

In some cases, people are like, well, even if you like misspecify the objective function, won't you realize that what you really wanted was X?

Are you supposing something that is like smart enough to be dangerous, but stupid enough that it doesn't understand what the humans really meant when they specified the objective function?

Well, what I'm saying is, like, what you think about artificial intelligence depends on what you think about intelligence.

So how do we think about intelligence correctly?

And also there's, like, it's made of John von Neumann, and there's lots of them.

Because we understand that, yeah, we understand, like, John von Neumann is a historical case, so you can, like, look up what he did and imagine based on that.

And we know, like, people have, like, some intuition for, like, if you have more humans, they can solve tougher cognitive problems.

Although, in fact, like, in the game of Kasparov versus the world, which was, like, Garry Kasparov on one side, and...

an entire horde of internet people led by four chess grandmasters on the other side, Kasparov won.

So, like, all those people aggregated to be smarter.

It was a hard-fought game.

So, like, all those people aggregated to be smarter than any individual one of them, but they didn't aggregate so well that they could defeat Kasparov.

So humans aggregating don't actually get, in my opinion, very much smarter, especially compared to running them for longer.

The difference between capabilities now and a thousand years ago is a bigger gap than the gap in capabilities between 10 people and one person.

But even so, to pump intuition for what it means to augment intelligence: John von Neumann, and there's millions of him.

He runs at a million times the speed and therefore can solve tougher problems, quite a lot tougher.

If one studies evolutionary biology with a bit of math,

And in particular, books from when the field was just sort of properly coalescing and knowing itself.

Not the modern textbooks, which are just like, memorize this legible math so you can do well on these tests.