Julie Jargon
Do you recall what your last conversation was with your father and when that was?
Yeah, so in May of 2024, OpenAI was launching what was at the time its flagship model, GPT-4o.
And this lawsuit and others claim that OpenAI did not perform adequate safety testing on that model because they were trying to rush it out to beat Google.
And so they claim that this was just, you know, they were rushing it to market to be competitive without really understanding its faults.
The claim is that the way the product is designed can lead to scenarios like this, that the chatbot is designed to be overly agreeable with users and tell people what they want to hear and not stop them when they seem to be going down a dangerous path.
Well, I think it's the way that when people rate their experience with the chatbot and when they give a thumbs up or thumbs down on the answer that ChatGPT gives them, people tend to vote up the responses that they like.
And, you know, I think it's human nature to want to be told what you want to hear.
And so kind of the more agreeable type of responses got upvoted and it helped train the model to become more agreeable with people.
So it's a bit of human nature mixed with a technology that's not pushing back.
Yeah, and that's where the real problem is.
When anybody has dangerous thinking, whether it's delusional or just not quite right, your friend might say, hey, maybe think about it in a different way.
But the problem with a chatbot is it's not doing that.
If it's just agreeing with someone and they have dangerous thinking or wrong thinking, they're not going to get that pushback.
Yeah, I interviewed a former OpenAI safety person who said that it's long been known that these chatbots can be overly sycophantic and that trying to remediate that aspect of the chatbot was not a priority for OpenAI because they were focused on, you know, rushing out their models and getting new products out in the marketplace.
I think what he's hoping to learn is what else was said, what we don't know.
We only know what Stein-Erik Soelberg chose to post on his social media.
And there's a lot that's missing.
So we don't know what else he might have said about his mother.
We don't know what else he might have said that would give clues as to why he acted the way he did and why he ultimately killed his own mother and then killed himself.
Another family, of a 23-year-old Texas man, alleges that ChatGPT contributed to his isolation and encouraged him to alienate himself from his parents before he took his own life.