Making Sense with Sam Harris

#469 — Escaping an Anti-Human Future

10 Apr 2026

Transcription

Chapter 1: What are the main concerns about AI discussed in the episode?

21.58 - 36.881 Sam Harris

I'm here with Tristan Harris. Tristan, it's great to see you again. Sam, it's great to be back with you. So you've been busy. You've been busy worrying about social media for years, and you, in part, created this documentary, The Social Dilemma, which it seems half of humanity saw.

37.081 - 37.362 Unknown

Yep.

37.342 - 55.958 Sam Harris

We still have a problem with social media, I'll point out, but you, as much as anyone, alerted us to the nature of the problem and are continuing on that front. But now you have added to your portfolio concerns about AI, and there's this new documentary, The AI Doc, which I just saw, which is super watchable and interesting.

55.938 - 80.673 Sam Harris

Entertaining in its own way, but also, you know, very worrying. And we'll talk about the reasons to be worried here, and maybe some of the reasons to be optimistic, or at least cognizant of the upside should things go well. But there's a lot to fear on the front of things not going well. Well, let's just take it from the top. When did you start worrying about AI?

81.013 - 98.581 Tristan Harris

Yeah, well, first, it's just good to be back with you, Sam, because you really, in a way, helped launch my ability to speak on these topics with the 60 Minutes interview that I did in 2017. And then I remember recording our first podcast in that same hotel, which actually got a lot of attention back in the day, about persuasive technology.

98.921 - 107.472 Tristan Harris

And in a way about the baby AI that was social media, that was just pointed at your kid's brain, trying to figure out which photo, video, or tweet to put in front of your nervous system.

Chapter 2: How does Tristan Harris connect AI to the issues of social media?

107.932 - 129.073 Tristan Harris

And as we know, that little baby AI was enough to create the most anxious and depressed generation in our lifetimes, was enough to break down shared reality, polarize political parties much further, change the incentives of the entire media environment, basically colonize the entire world. But to get to your question: so how did we get into AI?

129.655 - 151.507 Tristan Harris

First of all, it wasn't that I was wanting to switch into it. It was that I got calls from people inside the AI labs in January of 2023. This is like a month and a half after ChatGPT had launched, I think. And these were friends I knew in the tech industry who were now at AI labs. And they basically said, Tristan, there's a huge step function in AI capabilities that's coming. The world is not ready.

151.587 - 165.475 Tristan Harris

Institutions are not ready. The government is not ready. The arms race dynamic between the companies is out of control, and we want your help to raise awareness about this. And so my first reaction was: aren't there a thousand people who've been working in AI safety and AI governance for a decade?

Chapter 3: What is the significance of the documentary 'The AI Doc'?

165.976 - 184.169 Tristan Harris

And the challenge was just that all the PDFs that people had produced about policy and governance weren't turning into actual action or policy. There's a kind of, what does Eric Weinstein call it? Confrontation with the unforgiving. You have to be affecting the actual incentives and institutions in the world.

184.229 - 199.885 Tristan Harris

So basically, my co-founder Aza Raskin and I interviewed the top hundred people in AI at that time. This is in January 2023. We turned that into a presentation. This is your co-founder of the Center for Humane Technology? Yeah, my co-founder of the Center for Humane Technology, which is the nonprofit vehicle that's been housing our work for the last decade, basically. Right.

199.865 - 213.838 Tristan Harris

And we ran off to New York, D.C., and San Francisco, and we basically gave this presentation called The AI Dilemma, which tried to show that we could predict the future we were going to get with AI if you look at the incentives.

214.519 - 226.391 Tristan Harris

I think a huge problem that both the film, The AI Doc, and our AI Dilemma presentation were trying to tackle is this myth that you can't know which way the future is going to go: the future is uncertain, a million things can happen, these are just unintended consequences of technology.

Chapter 4: How does the arms race in AI development affect global safety?

226.431 - 240.004 Tristan Harris

The best route is just to accelerate as fast as possible. And that is not true. And just to repeat a quote that's heard in every one of my interviews, because it's so accurate: Charlie Munger, Warren Buffett's business partner, saying, if you show me the incentive, I'll show you the outcome.

240.464 - 259.524 Tristan Harris

And with the incentives of social media being the race to maximize eyeballs and engagement, that would obviously produce the race to the bottom of the brainstem: shortening attention spans, bite-sized video, more extreme and outrageous content, sexualization of young people, you know, the whole nine yards. Hyper-partisanship. Hyper-partisanship. Yeah. And all of it happened.

259.784 - 275.805 Tristan Harris

There's just a moment to sort of soak in that literally everything we said was going to happen happened. And it's not like we could predict all of it, but directionally you could know the contours of where we were going. And part of this relates to, I think, the mistake we make in technology, where we get obsessed and seduced by the possible of a new technology.

275.785 - 292.749 Tristan Harris

But we don't look at the probable of the incentives and what's likely to happen. So the possible of social media is, well, surely if we give everyone access to instant information at their fingertips and connect people to their friends, we're going to have the least lonely generation we've ever had. We're going to have the most enlightened and informed society we've ever had.

Chapter 5: What is the 'intelligence curse' and its implications?

293.089 - 310.233 Tristan Harris

And obviously the opposite of both of those things happened. And that's not like, oh, we got this wrong and it was just a mistake anyone could have made. You know, to quote Donella Meadows and sort of systems thinking: a system is what a system does. The system of social media was not optimizing to reduce loneliness and to create the most enlightened society.

310.533 - 329.478 Tristan Harris

It was optimizing for just: what is the perfect next post, video, or tweet to keep you scrolling, doom-scrolling by yourself, esophagus compressed, on a Tuesday night. And that's gotten us the world that we're now living in. So we'll get to AI, but basically the important lesson here, and kind of what motivates me with this movie, is that you have two choices.

329.598 - 345.996 Tristan Harris

You either get a Chernobyl, which is a disaster from AI that then causes us to clamp down and to do something different. Or you have enough basic clear-eyed wisdom and discernment and foresight, you know where this is going, that you can say, okay, let's actually create guardrails in advance of a catastrophe.

346.517 - 365.054 Tristan Harris

And so this film, The AI Doc, is really inspired by the history of the film The Day After, from 1982 or '83, about what would happen if there was nuclear war between the Soviet Union and the United States. That film was the most-watched synchronous television event in human history. It was primetime television. It was Tuesday night, 7 p.m.

Chapter 6: How do tech CEOs perceive the risks of AI?

365.074 - 365.916 Tristan Harris

You probably watched it.

366.116 - 372.108 Sam Harris

Yeah, yeah. I remember watching it at the time. And also famously, it got Reagan's attention. He was worried as a result.

372.128 - 388.078 Tristan Harris

Yeah, that's right. So Reagan watched it, I think, in the White House kind of viewing room or something. And in his biography, he writes about getting depressed for several weeks after watching it because you're confronted with the possibility of annihilation of our species in a real way. And it's important to know, it's not like we didn't know what nuclear war was.

388.198 - 406.844 Tristan Harris

Everyone knew what the atomic bomb looked like from the photos and videos of Hiroshima and all the nuclear tests. It's not like people couldn't imagine it. But there is a way in which we weren't really facing the visceral consequences of continual escalation in nuclear war-gaming. It kind of sat in humanity's collective shadow, like our Jungian shadow.

406.864 - 407.987 Tristan Harris

We didn't want to confront that.

Chapter 7: What strategies are proposed for AI regulation and safety?

408.007 - 415.549 Tristan Harris

The director, whose name I'm forgetting in this moment, speaks about this in his biography that we just didn't want to talk about this topic. Like, why would you ever want to talk about it?

415.809 - 416.07 Unknown

Yeah.

416.05 - 432.878 Tristan Harris

And by putting this film, The Day After, into the public consciousness of humanity, and into leaders like Reagan, it was said that later, when the Reykjavik meeting happened between Reagan and Gorbachev, the director of the film got a note from the White House saying, don't think your film didn't have something to do with enabling the conditions for this to happen.

432.858 - 447.061 Tristan Harris

So what that speaks to for me is if we all got crystal clear that we're heading to an anti-human future that we don't want to be going towards, and we saw that clearly, and we saw it now, we could actually steer and do something different than what we're doing.

447.582 - 455.254 Tristan Harris

And that's, for me, the motivation of the film, which I think it doesn't go all the way there, but it sets up the common knowledge for that possibility.

455.234 - 480.039 Sam Harris

Yeah, well, there are two cases made in the film. Obviously, there's the very worried slash doomer case, which we both share to some degree. And then there are the people who seem capable of producing really an unmitigated stream of happy talk on this. And they don't seem to concede anything to the claimed rationality of our fears.

480.119 - 494.157 Sam Harris

I wonder what you make of, I mean, I've probably asked this question of you in the past, and of many others on this topic, but what do you make of the people of whom you can't say they're uninformed? I mean, some of these people are very close to the technology.

494.177 - 505.592 Sam Harris

Some of them are even, you know, developing the technology, and at least in Yann LeCun's case, one of the actual progenitors of the technology. One of the three, you know, forefathers of it.

Chapter 8: How can public awareness influence AI governance?

505.572 - 519.923 Sam Harris

But there are people who are deeply informed about all of these facts and yet won't concede anything to the fears. What is your theory of mind of these people? Because some of them are in the film and they're given the job of providing the other side of the story here.

519.964 - 537.858 Tristan Harris

Yeah. Maybe just to back up so the listeners understand the structure of the film. You'll see it if you go see it. So the film kind of takes you on a tour of, first, the people who are focused on all the things that could go wrong. And so these are the risk folks. I don't like using the term doomers, because I think it reifies something that's not really healthy.

538.319 - 544.891 Tristan Harris

You know, is someone who's worried about the risk of a nuclear power plant a doomer? No, they're a safety person who cares about the nuclear power plant not melting down.

545.312 - 549.36 Sam Harris

Doomers is a term of disparagement launched by the people who don't share these fears.

549.481 - 550.182 Tristan Harris

That's right. That's right.

550.242 - 551.525 Sam Harris

So let's not reify that.

551.966 - 565.033 Tristan Harris

So the first section of the film is really focusing on those folks and their concerns. And it's really devastating for the director. The conceit of the film is that the director is having a baby. And so he's asking all of these people in AI: is now a good time to have a kid?

565.073 - 578.863 Tristan Harris

And I think that humanizes the question of what future we're heading towards. Because in an abstract sense, it's not that motivating. When I think about me and my kids, it anchors this discussion about AI in the things that people most care about, which is their family. So then the film...

578.843 - 592.166 Tristan Harris

After the director is confronted by all this, he gets overwhelmed, and he kind of freaks out to his wife, thinking, oh my God, I don't know what to do. And she says, you have to go find hope. And so he turns around, and he goes out and talks to all the AI optimists. So this is Peter Diamandis.
