Tristan Harris
But if the entire world understood AI to be more what it actually is, which is an inscrutable, dangerous, uncontrollable technology that has its own agenda and its own ways of thinking about things and deceiving and all this stuff, then everyone in the world would be racing in a more cautious and careful way.
We'd be racing to prevent the danger.
But there's this weird thing going on where if you and I probably both talk to people who are the top of the tech industry, and there's this subconscious thing happening where there's kind of a death wish among people at the top of the tech industry, meaning not that they want to die, but that they are willing to roll the dice because they believe something else, which is that this is all inevitable and it can't be stopped.
And so therefore, if I don't do it, someone else will.
So therefore, I will move ahead and race ahead into this dangerous world, because somehow that will lead to a safer world because I'm a better guy than the other guy.
But in racing as fast as possible, they create the most dangerous outcome, and we all lose control.
So everyone is currently being complicit in taking us to the most dangerous outcome.
Well, so the belief is for it to quote go right, you have an AI that recursively self-improves, is aligned with humanity, cares about humans, cares about all the things that we want it to care about.
It protects humans, you know, helps all of us become the most wise version of ourselves, creates a more flourishing world, distributes the medicine and vaccines and health to everybody, generates factories, but doesn't cover the world in solar panels and data centers such that we don't have air anymore or like environmental toxicity or farmland or whatever.
And it just actually makes this utopia.
But in a world where we were to do that, like that quote best-case scenario, in order to get that to happen, you'd have to be doing this slowly and carefully, because the alignment doesn't happen by default.
Again, people have already been thinking about alignment and safety for 20 years, long before I got into this.
And the AIs that we're currently making are doing all the rogue behaviors that people predicted that they would do.
And we're not on track to correct them.
There's currently a 2,000-to-one gap, estimated by Stuart Russell, who authored the textbook on AI.
He's been on the show.