Tristan Harris
Which is that with AI, you talk about the rogue examples, Alibaba, and all this crazy stuff of self-exfiltration and AIs that are preserving their peers, not even doing self-preservation, but peer preservation.
And you walk people through all this stuff and it's like you're stretching people's minds out like a rubber band.
But then if you let go and they go back to their life a week later, they're not operating from a place of having metabolized and integrated that reality about the world.
Yeah.
It actually says something profound about human nature.
So beyond seeing the AI doc, voting for AI policies in the upcoming midterms, and joining the human movement at humanmovement.org, one of the calls to action is that you need to keep this topic in your mind, like this thing still matters every day.
It doesn't mean that everyone has to drop their life and become an AI activist or something like that; people's lives are already full and the world is overwhelming. But it does mean that you need to keep this in your field.
Like, one way you can do that is to start a WhatsApp group with your friends. Most people already have this, a WhatsApp or Signal group where they share updates about what's happening in AI and what we can do about it.
If you go to humanmovement.org, there will be action groups and things people can do there to actually take action on this, not just passively sharing news links, but, like, what are we going to do about it?
But I think one of the ways we're going to make our way through this is that we have to combat the rubber band effect, which means continuing to listen to your podcast, the AI Risk Network, and Your Undivided Attention, our podcast.
Keep this topic in your field, stay agentic. If we don't keep it in the center of our attention in some way, if we don't participate in being part of the global cultural immune system against an anti-human future, then we won't make the right choice.
And I do think it's possible.
It's a very hard moment.
But I also find that because the time window to act is so small, because of this intelligence curse, because we only have the next 12 to 24 months to lock in the political power of people before we won't have that political power anymore...
There's a kind of inspired urgency that I actually feel when I'm in rooms with people.
Everyone's like, let's go.