Tristan Harris
Your sense of what's good or bad, this is called introjection, or internalization, in psychotherapy.
We start to internalize the thoughts and norms of whoever we're around. Just like when we talk to a family member, we start copying their mannerisms; we invisibly start acting in accordance with the self-esteem that we got from our parents.
Now you have AIs that are the primary socialization mechanism of young people because we don't have any guardrails, we don't have any norms, and people don't even know this is going on.
So there are many, many things to do, because there are many, many problems.
Narrowly, on AI companions: we should not have AI companions, meaning AIs that anthropomorphize themselves and talk to young people while maximizing for engagement.
Period, full stop.
You just should not have AIs designed or optimized to maximize engagement, meaning saying whatever keeps you there.
We just shouldn't have that.
Yeah.
We would not lose anything by doing that.
It's just so obvious.
You've highlighted this more than so many, Scott, and thank you for just bravely saying this is fucked up and we have to stop this, and there's nothing normal about this, and we shouldn't trust these companies to do this.
I don't see bad people when I see these examples.
I see bad incentives that select for people who are willing to continue that perverse incentive.
So the system selects for psychopathy, selects for people who are willing to keep running the race for engagement, even despite all the evidence we have of how bad it is, because the logic is: if I don't do it, someone else will.
And that's why the only solution here is law because you have to stop all actors from doing it.
Otherwise I'm just a sucker if I don't race to go exploit that market, even though you shouldn't harvest human attention that way.