Stuart Russell
The company's response is, well, we don't know how to do that, so you can't have a rule.
Literally, they are saying humanity has no right to protect itself from us.
Yes, and the aliens will write an amazing, tragic play cycle about what happened to the human race.
Yeah, which is the wrong way around.
But God is still watching over us and probably wondering when we're going to get our act together.
So I think the question of whether it's possible to make super intelligent AI systems that we can control.
Is it possible?
Yeah.
I think, yes, I think it's possible.
And I think we need to actually just have a different conception of what it is we're trying to build.
For a long time with AI, we've just had this notion of pure intelligence, right?
The ability to bring about whatever future you, the intelligent entity, want to bring about.
The more intelligence, the better.
The more intelligent, the better, and the more capability it will have to create the future that it wants.
And actually, we don't want pure intelligence, because the future that it wants might not be the future that we want.
The universe doesn't single humans out as the only thing that matters, right?
Pure intelligence might decide that actually it's going to make life wonderful for cockroaches or actually doesn't care about biological life at all.
We actually want intelligence whose only purpose is to bring about the future that we want.
So we want it to be, first of all, keyed to humans specifically, not to cockroaches, not to aliens, not to itself.
We want to make it loyal to humans.