Eliezer Yudkowsky
Podcast Appearances
First you're going to have 1,000 people looking at this, and the one person out of 1,000 who is most credulous about the signs is going to be like, that thing is sentient.
While 999 out of 1,000 people think he's mistaken, almost surely correctly, though we don't actually know that.
And so the first people to say "sentience" look like idiots, and humanity learns the lesson that when something claims to be sentient and claims to care, it's fake, because it is fake, because we have been training them using imitative learning, and this is not spontaneous.
And they keep getting smarter.
You're going to have a whole group of people who can just never be persuaded of that, because to them, being wise, being cynical, being skeptical is to say, oh, well, machines can never do that.
You're just credulous.
It's just imitating.
It's just fooling you.
And they would say that right up until the end of the world, and possibly even be right, because, you know, the systems are being trained on an imitative paradigm.
And you don't necessarily need any of these actual qualities in order to kill everyone.
Before 2006, neural networks formed part of what was, to me at least (other people may have drawn finer distinctions), an indistinguishable blob of different AI methodologies, all of which promised to achieve intelligence without our having to know how intelligence works.
You had the people who said that if you just manually program lots and lots of knowledge into the system line by line, at some point all the knowledge will start interacting, it will know enough, and it will wake up.
You had people saying that if you just use evolutionary computation, if you mutate lots and lots of organisms that are competing together, the same way that human intelligence was produced in nature, then
it will wake up, without our having any idea of how AI works.
And you had people saying, well, we will study neuroscience and we will learn the algorithms off the neurons and we will imitate them without understanding those algorithms, a part I was pretty skeptical of, because it's hard to re-engineer these things without understanding what they do.
And so we will get AI without understanding how it works.
And there were people saying, well, we will have giant neural networks that we will train by gradient descent.
And when they are as large as the human brain, they will wake up.
We will have intelligence without understanding how intelligence works.