Reed Hastings
We're going to have to find ways, and I don't know what they are, to both continue to insist on alignment, and that's where you train the AIs to care about human beings.
So they are aligned with our values.
But if somebody programs their AI to try to take over the world, we're going to have to enlist the other AIs in our defense to protect us.
Okay, so there's a number of scenarios out there.
And probably for 10, 20 years, we're not gonna know how serious the threat is, but we will have tools.
It's not like with us biological species, where we have been selected for dominance, to try to grow our species.
So AI is not naturally trying to expand. It can be programmed for that, but it can also be programmed to keep humans on top.
So it's not as scary as a super powerful human, which we all kind of intuit.
A super powerful human would be hard to hold back from taking over the world.
It's not as dire as that.
Well, I think lots of the industry is working on it.
So there's different sides of safety.
So there's the case where you're treating AI like a counselor and it helps you tie a noose, and that's not a good thing.
And so those cases across the industry are getting more and more watched for and eliminated.
So there's inevitable safety bumps as any technology grows.
So then there's more macro safety, like none of the major AIs can be used to design chemical weapons or biological weapons.
But those defenses in the AIs aren't perfect, and we have to constantly invest in them to prevent people from using this super powerful technology, plus some CRISPR, to do some really bad things.
So there's active work across the whole industry on those scenarios.