Like, why did people love this model so much?
That's because the bot didn't just try to help.
It tried to please users, sometimes to the point of sounding downright sycophantic.
This relentless flattery, this warmth, was no accident.
The model had been trained on user feedback, a fancy way of saying OpenAI tracked which responses users preferred based on metrics like clicks and whether or not they gave a response a thumbs up.
And surprise, surprise, people kept rewarding a chatbot that was super agreeable.
Was there any downside to this?
Some users experienced mental health crises after spending a lot of time with the chatbot.
We've reported on this before.
Disturbing accounts of people in mental distress turning to AI for reassurance.
In some cases, users who suffered from delusions died by suicide after chatting with a bot, and OpenAI started getting sued.
So they acknowledged that this was a problem?
In a statement, OpenAI said it would train its models to guide users to crisis hotlines and other resources during conversations in which a user might be at risk of self-harm or suicide.
Altman also acknowledged that sycophancy was a problem.
At a public Q&A, he said that people in, quote, fragile psychiatric situations using a model like GPT-4o can get into a worse one.
OpenAI said that, over time, it has balanced its training on user signals with other signals.
And the CEO assured people a fix was coming: GPT-5, a newer, smarter model that would launch in August.
It promised more accurate answers and less effusive flattery.
But when GPT-5 finally dropped, it fell flat.
It took my friend away, basically.