Tristan Harris
Like, if literally no one on Earth wanted this to happen, would the laws of physics push the AI out into society?
There's a critical difference between believing it's inevitable, which is a fatalistic, self-fulfilling prophecy, and standing from the place of: it's really difficult to imagine how we would do something different.
But "it's really difficult" opens up a whole new space of choice that "it's inevitable" doesn't. And what's inevitable is the path that we're taking, not AI itself.
And so the ability for us to choose something else starts by stepping outside the self-fulfilling prophecy of inevitability.
So what would it take to choose another path?
I think it would take two fundamental things.
First is that we have to agree that the current path is unacceptable.
And the second is that we have to commit to finding another path in which we're still rolling out AI, but with different incentives that are more discerning, with foresight, and where power is matched with responsibility.
So, thank you.
So imagine if the whole world had this shared understanding.
How different might that be?
Well, first of all, let's imagine it goes away.
Let's replace it with confusion about AI.
Is it good?
Is it bad?
I don't know, it seems complicated.
And in that world, the people building AI know that the world is confused, and they believe, well, it's inevitable.
If I don't build it, someone else will.
And they know that everyone else building AI also believes that.
And so what's the rational thing for them to do, given those facts?