Stuart Russell
The main thing that the AI community is familiar with in my work is a textbook that I wrote.
So it was with one of the CEOs of a leading AI company.
He sees two possibilities, as do I. Either we have a, let's say, small-scale disaster on the scale of Chernobyl.
Yeah, so this nuclear plant blew up in 1986, killed a fair number of people directly and...
maybe tens of thousands of people indirectly through radiation.
Recent cost estimates put it at more than a trillion dollars.
So that would wake people up.
That would get the governments to regulate.
He's talked to the governments and they won't do it.
So he looked at this Chernobyl scale disaster as the best case scenario because then the governments would regulate.
Yeah, it wouldn't have to be a nuclear disaster.
It would be either an AI system that's being misused by someone, for example, to engineer a pandemic, or an AI system that does something itself, such as crashing our financial system or our communication systems.
The alternative is a much worse disaster where we just lose control altogether.
Yes, it's...
It must be a very difficult position to be in, in a sense, right?
You're doing something that you know has a good chance of bringing an end to life on Earth, including that of yourself and your own family.
They feel that they can't escape this race, right?
If a CEO of one of those companies were to say, you know, we're not going to do this anymore, they would just be replaced.
Because the investors are putting their money up because they want to create AGI and reap the benefits of it.
So it's a strange situation where at least all the ones I've spoken to, I haven't spoken to Sam Altman about this, but Sam Altman, even before becoming CEO of OpenAI...