Tristan Harris
To race as fast as possible, and meanwhile to ignore the consequences of what might come from that, to look away from the downsides.
But if you replace that confusion with global clarity that the current path is insane and that there is another path, and if you take what we don't want to look at and witness it clearly, we pop through the prophecy of self-fulfilling inevitability. We realize that if everyone believes the default path is insane, the rational choice is to coordinate to find another path.
And so clarity creates agency.
If we can be crystal clear, we can choose another path, just as we could have with social media.
And in the past, in the face of seemingly inevitable arms races, like the race to do nuclear testing: once we got clear about the downside risks of nuclear tests and the world understood the science of that, we created the Nuclear Test Ban Treaty.
And a lot of people worked hard to create the infrastructure to prevent it.
You could have said it was inevitable that germline editing of human genomes, to make super soldiers and designer babies, would set off an arms race between nations. Once the off-target effects of genome editing and its dangers were made clear, we coordinated on that too.
You could have said that the ozone hole was just inevitable, that we should do nothing and that we would all perish as a species.
But that's not what we do.
When we recognize a problem, we solve the problem.
It's not inevitable.
And so what would it take to illuminate this narrow path?
Well, it starts with common knowledge about frontier risks.
If everybody building AI shared the latest understanding of where these risks are arising from, we would have a much better chance of illuminating the contours of this path.
And there are some very basic steps we can take to prevent chaos.