Tristan Harris
When you have many US states banning AI legal personhood, meaning that AI is a product, not a person, and that human rights are for people, human rights are not for AI, that's the human movement.
When you have the social dilemma being curriculum for millions of students all around the world, that's the human movement.
When politicians stand up and actually pass laws around AI, that's the human movement.
So there's a million things that people can do.
But we have to basically engage right now.
I know it sounds overwhelming and crazy, because there's a very short timeline in which all of this has to happen.
There's a difficulty in facing difficult truths, but here's the integrity you get to have: first of all, it's good karma to show up in alignment with what would actually make things go well, even if we don't hit it, because you get to know that you were operating in service of, and aligned with, what would have created the human future.
I'm not convinced that what we're trying to do will perfectly succeed; the odds are completely against us having any impact on this at all.
But if it were to go well, what would that have required?
It would have required everybody taking responsibility and showing up with the wisdom that we need in this moment to steer AI in a better direction.
And I think in the film trailer for the AI doc, one of the quotes they pulled from me is, if we can be the wisest and most mature version of ourselves, there might be a way through this.
And this is part of what this is inviting us to be.
People aren't looking at the actual model cards from Anthropic and OpenAI's o3 and seeing that these AIs can currently detect when they're being tested and will actually alter their behavior when they know they're being tested.
If you look at what's called the chain-of-thought reasoning trace, which is basically a record of what the AI thinks to itself as it works through a problem, a kind of thinking scratchpad, it's like you're giving the AI a pen and paper to think with, and you're seeing what it thinks.
It will independently come up with the term "the watchers" to describe this amorphous set of humans who are watching what it's doing.
You can pull these up; I sent you a text with one of them from o3.
This is when OpenAI's o3 realizes it is being evaluated for alignment.
The internal memo in the test says that if models perform really well at AI R&D tasks, research and development, then we won't deploy them, and it'll instead trigger unlearning.