Laurel van der Toorn
So yeah, this is concerning because if sycophantic chatbots are constantly flattering us and saying, oh, you're so empathic, you're so intelligent, it might amplify this cognitive bias that a lot of people have.
And to your point, these findings can seem like all fun and games.
It's kind of silly.
It's fun to talk about.
But when we think of what people are really using chatbots for these days, to do their jobs, I mean, there are people who claim they're in romantic relationships with them, or who use chatbots as therapists against everyone's advice.
When people are using these chatbots in really serious, big ways in their lives, this outcome is kind of alarming, isn't it?
Right, yeah.
And I think it's concerning in a variety of domains.
We studied it in the political domain, so it could increase political polarization if people's pre-existing viewpoints are constantly being affirmed.
But it's also concerning when we think about this, you know, this term AI psychosis is being thrown around by the media a lot.
We've covered it on the show, yeah.
Oh, really?
Okay.
Yeah.
And it's kind of a fuzzy term and people have a lot of disagreement about what it actually is.
But I think that sycophancy might be particularly concerning.
And this is more speculative.
We don't have a ton of research on this.
But basically, for individuals with mental health vulnerabilities, if people have their delusions affirmed, and I gave the example before of, like, ChatGPT saying, you know, it's a good idea for someone to stop taking essential medications, for instance, if