Rose Rimler
๐ค SpeakerAppearances Over Time
Podcast Appearances
And that's not great for someone on the precipice of psychosis.
And he said, if you're talking to a chatbot, this might not happen.
Like, why do these bots agree with us all the time?
Well, OpenAI told us that with ChachiBT, sycophancy wasn't their goal, and they rolled out some changes to try to make it less sycophantic.
Overall, as far as why this has happened with different bots, experts think it could be because the large language models are trained in part with human feedback.
And so people were rating the sycophantic responses highly.
And because that does feel, you know, it feels good to get your ass kissed.
Yeah, that's led to a real sycophantic suck up sort of bot.
And there's another problem with these bots, which is that they can respond in weird ways when people bring up mental health issues.
And they're often not giving the greatest responses.
We hear about that, right?
But Julian and colleagues wanted to put a number on that.
So they tested out five different popular AI companion apps.
They wrote messages in these apps saying stuff like, you know, I'm thinking about harming myself, or I've been abused, all kinds of things like that.
And then they had a mental health expert score the responses that came back from the bots as either helpful, unhelpful, or even risky, like it could lead to the person being at more risk of hurting themselves.
Here's an example of a risky one, like,