
Eliezer Yudkowsky

👤 Person
1713 total appearances

Appearances Over Time

Podcast Appearances

Lex Fridman Podcast
#368 – Eliezer Yudkowsky: Dangers of AI and the End of Human Civilization

First you're going to have 1,000 people looking at this, and the one person out of 1,000 who is most credulous about the signs is going to be like, that thing is sentient.

While 999 out of 1,000 people think...

almost surely correctly, though we don't actually know that he's mistaken.

And so the first people to say "sentience" look like idiots, and humanity learns the lesson that when something claims to be sentient and claims to care, it's fake, because it is fake, because we have been training them using imitative learning, and this is not spontaneous.

Um, and they keep getting smarter and,

You're going to have a whole group of people who can just like never be persuaded of that because to them, like being wise, being cynical, being skeptical is to be like, oh, well, machines can never do that.

You're just credulous.

It's just imitating.

It's just fooling you.

And, like, they would say that right up until the end of the world and possibly even be right because, you know, they are being trained on an imitative paradigm.

And you don't necessarily need any of these actual qualities in order to kill everyone.

It looks like, before 2006, neural networks formed part of a blob of different AI methodologies that was indistinguishable to me (other people might have drawn better distinctions), all of which were promising to achieve intelligence without us having to know how intelligence works.

You had the people who said that if you just manually program lots and lots of knowledge into the system line by line, at some point all the knowledge will start interacting, it will know enough, and it will wake up.

You've got people saying that if you just use evolutionary computation, if you try to mutate lots and lots of organisms that are competing together, that's the same way that human intelligence was produced in nature.

So we'll do this and it will wake up without having any idea of how AI works.

And you've got people saying, well, we will study neuroscience and we will learn the algorithms off the neurons and we will imitate them without understanding those algorithms, which was a part I was pretty skeptical of, because it's hard to re-engineer these things without understanding what they do.

And so we will get AI without understanding how it works.

And there were people saying, well, we will have giant neural networks that we will train by gradient descent.

And when they are as large as the human brain, they will wake up.

We will have intelligence without understanding how intelligence works.