Stuart Russell
And, you know, that is naturally going to make you think, okay, where does this end?
Well, so it doesn't take a genius to realize that if you make something that's smarter than you, you might have a problem.
You know, and Turing, Alan Turing, you know, wrote about this and gave lectures about this, you know, in 1951.
He did a lecture on the radio, and he basically says, you know, once the machine thinking method starts, you know, very quickly they'll outstrip humanity.
And, you know, if we're lucky, I think he says, we may be able to turn off the power at strategic moments, but even so our species would be humbled.
And actually, I think he was wrong about that, right?
If it's a sufficiently intelligent machine, it's not going to let you switch it off if it's actually in competition with you.
I think he means that we would realize that we are inferior, right?
That we only survive by the skin of our teeth because we happen to get to the off switch.
That was a close call.
Just in time.
And if we hadn't, then we would have lost control over the earth.
So the paperclip scenario is one example of... I think... So the main problem I'm working on is the control problem, the problem of machines pursuing objectives that are, as you say, not aligned with human objectives.
And this has been the way we've thought about AI since the beginning.
You build a machine for optimizing and then you put in some objective and it optimizes.
And we can think of this as the King Midas problem.
Because King Midas put in this objective, everything I touch should turn to gold, and the gods, they're like the machine, they said, okay, done.
You know, you now have this power.
And, of course, his food and his drink and his family all turned to gold.