Stuart Russell
And so when you look at the work that the nuclear operators have to do to show that their system is that reliable...
It's a massive mathematical analysis of the components, you know, redundancy.
You've got monitors, you've got warning lights, you've got operating procedures.
You have all kinds of mechanisms which over the decades have ratcheted that risk down.
It started out, I think, one in 10,000 years, right?
And they've improved it by a factor of 100 or 1,000 by all of these mechanisms, right?
But at every stage, they had to do a mathematical analysis to show what the risk was.
The AI companies developing these systems don't even understand how the AI systems work.
So their 25% chance of extinction is just a seat-of-the-pants guess.
They actually have no idea.
But the tests that they are doing on their systems right now show that the AI systems are already willing to kill people to preserve their own existence, right?
They will lie to people.
They will blackmail them.
They will launch nuclear weapons rather than be switched off.
And so there's no positive sign that we're getting any closer to safety with these systems.
In fact, the signs seem to be that we're going deeper and deeper into dangerous behaviors.
So rather than say "ban", I would just say: prove to us that the risk is less than 1 in 100 million per year of extinction or loss of control, let's say.
And so we're not banning anything.
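As a rough illustration of the figures quoted above (a sketch using the numbers as spoken, not official nuclear-safety data), the arithmetic can be laid out like this:

```python
# Sketch of the risk arithmetic quoted in the transcript.
# All figures are the speaker's approximations, not regulatory numbers.

initial_nuclear_risk = 1 / 10_000        # quoted starting point: 1 in 10,000 per year
improvement_factors = (100, 1_000)       # quoted range of improvement over the decades

# Current nuclear risk after ratcheting down: roughly 1e-6 to 1e-7 per year
current_nuclear_risk = [initial_nuclear_risk / f for f in improvement_factors]

# The bar proposed for AI: less than 1 in 100 million per year
proposed_ai_threshold = 1 / 100_000_000  # 1e-8 per year

for risk in current_nuclear_risk:
    ratio = risk / proposed_ai_threshold
    print(f"nuclear ~{risk:.0e}/yr vs proposed AI bar {proposed_ai_threshold:.0e}/yr "
          f"-> the AI bar is ~{ratio:.0f}x stricter")
```

That is, the demanded AI threshold is one to two orders of magnitude below even the ratcheted-down nuclear figure.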