James Evans
Exactly.
And we see this, for example, in a study we did recently. Say you're faced with fake news: headlines that actually occurred in the world, some of which are denoted as fake misrepresentations of the truth and others of which are true.
You haven't seen them before.
The model that you're working with hasn't seen them before.
Right.
If you make your determination and then you chat with a chatbot, a standard, neutralized, seemingly or performatively objective chatbot, then you feel better about your performance after you update your judgment about whether or not the thing is true, but you actually perform worse, right?
And then one of the things we do is make the models more biased, either biased away from you or toward you.
And all of a sudden you get uncertain when you're chatting with the bot.
You chat much more with it.
You perform better, but you feel worse about it and you feel like your interaction with the bot was less meaningful.
So we're averse to conflict and we don't feel comforted by it, even though it makes us perform better in our lives. It helps us make better decisions.
So I think this is the question.
Where is it a benefit?
Where does it cross over the line and become a harm?
So if someone, say an older person who's lost family and doesn't have other connections, feels heard and is able to express themselves in ways that keep them sharp, that exercise their memory, and that enrich their life, that seems like a good thing.
Absolutely.
On the other hand, if someone is making critical life decisions in conversation with a sycophantic chatbot, one tuned for emotional and cognitive resonance rather than for their long-term life outcomes and flourishing, then that's deeply problematic.
So I think the question is: where is that line?