Reed Hastings
And so it is the white-collar canary in the coal mine.
And if software engineering jobs go down a lot over the next five years, then that's probably going to happen to law and architecture and many other things.
If, in fact, because of the increased productivity, people end up building a lot more software, sort of like the radiology example, then I think in many professions we will see a big expansion, and we can be more confident of the high-productivity outcome.
None of that really answers the long-term question that you asked, which is, how are we not the Neanderthals?
So I think reasonable chance, high productivity rather than mass unemployment.
But then ultimately, what if they're smarter and smarter and smarter than us?
We're going to have to find ways, and I don't know what they are, to both continue to insist on alignment, and that's where you train the AIs to care about human beings.
So they are aligned with our values.
But if somebody programs their AI to try to take over the world, we're going to have to enlist the other AIs in our defense to protect us.
Okay, so there's a number of scenarios out there.
And probably for 10, 20 years, we're not gonna know how serious the threat is, but we will have tools.
It's not like the AI is a biological species. We have been selected for dominance, to try to grow our species. AI is not naturally trying to expand. It can be programmed for that, but it can also be programmed to keep humans on top.
So it's not as scary as a super powerful human, which we all kind of intuit.
A super powerful human would be hard to hold back from taking over the world.
It's not as dire as that.
Well, I think lots of the industry is working on it.
So there are different sides of safety. There's the case where you're treating AI like a counselor and it helps you tie a noose. That's not a good thing.
And so those cases across the industry are getting more and more