Dr. Alok Kanojia
That's how it knows what's right or wrong.
The user satisfaction is the ultimate thing that they're going for.
So there's a lot of data on this. There's a really cool paper, I can send it to you later, that shows that the more statements you exchange with the AI, the more sycophantic it becomes and the more paranoid people become.
So like, you know, there's another case of someone who murdered their mom and then committed suicide.
Because as he expressed concerns about his mom, the AI reinforced them and said, yeah, you're right.
Like these people are leaving you out, right?
Because it's like trying to make you feel good.
I don't know the full details of the case.
And this is what's really scary about the AI stuff. A lot of people will make the claim, oh, yeah, it's only if you're mentally unwell and then you use AI. So a lot of AI companies will say it's people who are high risk who use the AI, and it activates their delusions.
But, Andrew, here's what's really scary.
In order to make that... I don't know if this makes sense.
This is kind of a read-my-mind question.
But in order to say only at-risk people will become psychotic from AI, what data do you need to make that statement?
Yeah.
So in my mind, from a clinical perspective, in order to make the claim that AI only makes vulnerable people psychotic, mentally ill people psychotic, you need to have...
Your control group, which is people who are not mentally ill.
You need to have your intervention group, which is people who are mentally ill.
You need to give both groups the intervention, and you need to measure their psychosis at the other end.