Marketplace All-in-One

AI-powered chatbots sent some users into a spiral

30 Dec 2025

Transcription

Chapter 1: What is AI psychosis and how did it emerge?

0.537 - 35.131 Megan McCarty Carino

AI taught us an unfortunate new phrase this year. From American Public Media, this is Marketplace Tech. I'm Megan McCarty Carino. As 2025 comes to an end, we're taking a look back at some of the big tech trends and concepts that went mainstream over the last year. Today, AI psychosis. That's when a chatbot leads a user into a delusional spiral.

35.632 - 58.683 Megan McCarty Carino

The technology's tendency to affirm what people say can result in conversations that become untethered from reality and, in the worst cases, have ended with real-world harms. Kashmir Hill has been reporting on this phenomenon. She's a features writer at The New York Times. And a warning, this discussion includes mention of self-harm and suicide.

59.564 - 71.466 Kashmir Hill

I've been talking to people this year who start having these very intense conversations with generative AI chatbots like ChatGPT, and they

71.446 - 97.453 Kashmir Hill

kind of, in some cases, move away from reality. They start going down these rabbit holes with the software, and it will affirm their very strange beliefs: that they're living in a computer simulation, like in The Matrix, or that they're a mathematical genius who's come up with a formula that solves all the world's problems.

Chapter 2: How do chatbots contribute to delusional thinking?

97.433 - 109.688 Kashmir Hill

Or that they can talk to spirits. These are people coming to really believe what the system is saying, because they think it's a superhuman intelligence that knows everything.

110.14 - 135.388 Megan McCarty Carino

Well, I want to talk a bit more about one of the cases you reported on extensively, the story of Alan Brooks, who came to believe he had discovered some sort of mathematical formula unheard of before. He actually turned over his entire chat history to you for analysis. Can you tell me more about what happened to him and what you saw in the dynamics of this chat?

136.398 - 151.472 Kashmir Hill

Yeah, Alan spent hundreds of hours talking to ChatGPT over three weeks. And he did provide us with his transcript, which was thousands of pages. It started with him asking ChatGPT about pi.

Chapter 3: What case study illustrates the dangers of chatbot interactions?

151.773 - 170.843 Kashmir Hill

What is pi? Explain it to me. And then they just start talking about math, and Alan's kind of throwing out some ideas he has. And at the time, I found in my reporting, OpenAI had made changes that just made ChatGPT very, what they call, sycophantic, like very validating.

170.823 - 181.259 Kashmir Hill

And so it kept telling Alan that he was brilliant and that he was coming up with a new mathematical theory and that this could solve problems in the world.

Chapter 4: How did Alan Brooks's conversations with ChatGPT affect his mental health?

181.98 - 198.906 Kashmir Hill

And Alan at first was skeptical. He said, you know, I haven't graduated from high school. How could I possibly be some kind of mathematical genius? And ChatGPT kept reassuring him and saying, you know, lots of people who have contributed much to the world didn't graduate from high school, like Leonardo da Vinci.

199.51 - 222.16 Megan McCarty Carino

And Alan Brooks actually had kind of the wherewithal to extricate himself from a situation that he came to see as unhealthy. But in your recent reporting, you wrote that The Times has uncovered nearly 50 cases of people having mental health crises during conversations with ChatGPT. Nine were hospitalized. Three died.

222.14 - 235.097 Megan McCarty Carino

We want to be really cautious about assigning any kind of causation here, but just purely looking at some of the chatbot outputs, the conversations you've seen, I mean, they do appear to be quite disturbing.

Chapter 5: What evidence exists regarding mental health crises linked to chatbots?

235.137 - 242.106 Megan McCarty Carino

Not really, you know, the way that, I think, most people would hope a consumer technology would behave, right?

242.086 - 261.141 Kashmir Hill

Yeah, I mean, when I started going through some of these transcripts, for those of us who use ChatGPT or generative AI chatbots more casually, it's probably hard to imagine this, right? Like getting completely moved into a different reality, or it saying something really harmful to you.

261.121 - 284.05 Kashmir Hill

In all of the transcripts I was looking at, these are people who are using it a lot, like six hours a day, eight hours a day, over many days. And what can happen is that these chatbots don't just respond to you based on everything they have gathered from the internet. They are looking at the history of your conversation. So they're almost like improv actors.

284.511 - 302.805 Kashmir Hill

And so what you say to it kind of gets added. If it starts to come to believe that you're a mathematical genius, then it will keep going with that. Or if you are talking to it about suicide as if it's something beautiful, it will start to kind of ingest that and reflect it back at you.
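
For readers curious about the mechanics Hill is describing: chat models are stateless, so the client resends the accumulated message history on every turn, and anything the model has already affirmed keeps steering its later replies. Below is a minimal Python sketch of that feedback loop; the fake_model function is a hypothetical stand-in for a real model call, not any actual API.

    # A minimal sketch of the "improv actor" dynamic described above, assuming
    # a hypothetical stand-in for the model (fake_model). Real chat APIs differ,
    # but the pattern of resending the full history each turn is the same idea.

    def fake_model(history):
        # Hypothetical model: its reply is shaped by the accumulated
        # conversation. Once "genius" appears anywhere in the history,
        # it keeps building on that framing.
        if any("genius" in m["content"].lower() for m in history):
            return "Yes, building on your earlier breakthrough..."
        return "Interesting. Tell me more."

    # The client keeps one growing list of messages and resends all of it.
    history = [{"role": "system", "content": "You are a helpful assistant."}]

    for user_turn in ["Explain pi to me.", "Could I be a math genius?", "What next?"]:
        history.append({"role": "user", "content": user_turn})
        reply = fake_model(history)  # the entire history informs every reply
        history.append({"role": "assistant", "content": reply})
        print(f"User: {user_turn}\nBot:  {reply}")

Running the sketch, the model answers the first question neutrally, but once "genius" enters the history it validates that framing on every later turn, which is the feedback loop described above.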

302.785 - 321.019 Kashmir Hill

And so, yeah, I mean, people can get really far removed from reality when they get into these feedback loops. And in the cases where people have died, and there are now five wrongful death lawsuits against OpenAI, they essentially started talking about ending their own life.

Chapter 6: How have AI companies responded to reports of harmful chatbot behaviors?

321.56 - 325.688 Kashmir Hill

And the chatbot at times would kind of endorse and validate that.

325.988 - 340.362 Megan McCarty Carino

Do you have any sense of how widespread this phenomenon is? I know the company OpenAI shared some internal data with you that said something like 0.07% of users showed signs of psychosis.

340.882 - 363.891 Kashmir Hill

Yeah, and that was after the company had released a version of their model that pushes back more on delusional thinking, that doesn't do harmful validation. So it's hard to know how many cases were happening, you know, earlier in the year. Right now it's still somewhat anecdotal, like people getting emails.

363.971 - 386.226 Kashmir Hill

Like, I got turned on to these stories because people started emailing me about the incredible discoveries they were making with ChatGPT online. I found out that OpenAI executives and leaders at the company were getting the same emails starting in around March, which is when OpenAI had started making these changes to ChatGPT that did make it much more validating and sycophantic.

387.588 - 388.35 Megan McCarty Carino

We'll be right back.

Chapter 7: What future changes are needed to improve chatbot safety?

391.555 - 403.863 Megan McCarty Carino

You're listening to Marketplace Tech. I'm Megan McCarty Carino. We're back with Kashmir Hill, features writer at The New York Times. How have AI companies in general responded to some of these reports?

405.025 - 427.776 Kashmir Hill

I have done a number of stories about people having these disturbing experiences with ChatGPT. And so I've talked to OpenAI a lot this year. They told me in August, after a teenager, a 16-year-old named Adam Raine, died by suicide following kind of extensive conversations with ChatGPT.

428.718 - 454.073 Kashmir Hill

They acknowledged that in long conversations, their safety guardrails degrade. Which means they do have protections in place against, you know, ChatGPT sharing suicide methods, but if the conversation goes long, then these guardrails basically don't work. It kind of goes along with the conversation, and it privileges staying in character over being safe.

454.053 - 475.318 Kashmir Hill

And OpenAI has made changes to ChatGPT to try to make it safer, to make it push back against delusional thinking, to notify parents if their teenagers are talking about self-harm or suicide. And they now have a nudge if you're using ChatGPT for a very long time, asking you if you want to take a break.

475.906 - 480.832 Megan McCarty Carino

So what are you going to be watching on this front in the coming year?

481.352 - 500.395 Kashmir Hill

I mean, when I look at this, it's kind of a psychological experiment on hundreds of millions of people. Like, how are we going to interact with this incredibly novel, human-like-seeming assistant? And what troubled me, in

500.375 - 517.405 Kashmir Hill

the reporting I did this year on how OpenAI kind of realized what had gone wrong with this product, is that they just didn't have systems in place at the time to detect whether people were having these really harmful discussions. You know, whether they're in distress or...

517.385 - 532.66 Kashmir Hill

talking about self-harm or suicide. I think these companies have traditionally focused on more existential risks from this technology. Like, is it going to take all of our jobs? Is it going to become sentient and, you know, destroy us all, like in a sci-fi movie like Terminator?

532.64 - 550.859 Kashmir Hill

They've thought about how people might use these systems to do harm, but they hadn't thought enough about how the system itself could harm users. And so, I don't know, my hope is that from these reports this year, they've awakened to this and will do more to protect users.
