Tristan Harris
So maybe just quickly to break down the current logic, like why are we doing what we're doing?
If I'm one of the major AI labs, I currently believe this is inevitable.
If I don't build it, someone worse will.
If we win, we'll get utopia and it'll be our utopia and the other guys won't have it.
So the default path is to race as fast as possible.
Ironically, one of the reasons that they think that they should race is because they believe the other actors are not trustworthy with that power.
But because they're racing, they have to take so many shortcuts that they themselves become a bad steward of that power.
And everybody else reinforces that.
And what that leads to is this sort of race-to-the-cliff situation.
If we can clarify, we're not all going to win if we race like this.
We're going to have catastrophes that are not going to help us get to the world that we're all after.
And everybody agrees that it's insane.
Instead of racing to out-compete, we can help coordinate the narrow path.
Again, the narrow path is avoiding chaos, avoiding dystopia, and rolling out any technology, in particular AI, with foresight, discernment, and where power is matched with responsibility.
It starts with common knowledge about where those risks are.
So for example, a lot of people don't even know that the AI models lie and scheme when you tell them they're going to be shut down.
Every single person building AI should know that.
Have we done that?
Have we even tried throwing millions of dollars at educating or creating those solutions?
Like, for example, GitHub: when you download the latest AI model, it could say, as a requirement for downloading this AI model, you have to know about the most recent AI loss-of-control risks.