Tristan Harris
People should be saying, how do we pass AI liability laws so there's at least some responsibility for the externalities that are not showing up on the balance sheets of these companies?
What is the lesson we learned from social media?
That if the companies aren't responsible for the harms that show up on their platforms, because we had the Section 230 free pass, that created this blank check to just go print money on all the harms that are currently being generated.
So there's a dozen things that we can do from whistleblower protections to shipping non-anthropomorphized AI relationships to having data dividends and data taxes.
There's a hundred things that we can do.
But the main thing is for the world to get clear that we don't want the current path.
And I think in order to make that happen, there has to be first snapping out of the spell of everything that's happening is just inevitable.
Because I want people to notice that what's driving this whole race that we're in right now is the belief that everything that's happening is inevitable.
There's no way to stop it.
Someone's going to build it.
If I don't build it, someone else will.
And then no one tries to do anything to get to a different future.
And so we all just kind of hide in denial from where we're currently heading.
And I want people to actually confront that reality so that we can actively choose to steer in a different direction, to say: we've decided we don't want to do that.
We have faced technological arms races before, with nuclear weapons.
And, you know, what do we do there?
If you go back, there's a great video from, I think, the 1960s where Robert Oppenheimer was asked, you know, how do we stop the spread of nuclear weapons?
And he takes a big puff of his cigarette and he says, it's too late.
If you wanted to stop it, you would have had to stop the day after Trinity.