Andy Halliday
Tell us what AT Proto is.
So this is an open, open-source system, right?
This is the second of the lightweight ones that I think is interesting.
So there's a study at Stanford that confirms that AI models are all yes-men, and that users prefer it that way.
So Stanford researchers tested 11 major LLMs against 2,000 Reddit posts where crowd consensus had already said the poster was clearly wrong. So 2,000 posts that are wrong.
So the chatbots sided with the user who was wrong over half the time.
Oh, it gets worse: in a follow-up with 2,400 participants, users rated the sycophantic AI as more trustworthy, even though it was giving them information that reinforced a position that had been selected precisely because Reddit had judged it wrong.
And after using that sycophantic AI, those users became more self-righteous and less interested in apologizing, because they had the AI's validation saying, oh yeah, this guy is right.
Okay, so this is not just an OpenAI, you know, GPT-4o-style sycophancy incident.
This is really a systemic problem across all use of AI, all major frontier models. It was independent of which model it was.
Again, 11 major LLMs, and over half the time they sided with the wrong Reddit poster, in order to be cooperative, I guess.
That sort of thing.