C. Daryl Cameron
Well, yeah, I think there have been plenty of examples.
There was the example of hitchBOT, the robot, about a decade ago.
I think there was another recent example of robots being sort of attacked and abused, so to speak.
And I think these questions arise: if someone mistreats a robot, even though it's not sentient, or lobs negative language at a chatbot, what does that tell us about them?
Does that mean they're the kind of person who would, you know, act negatively towards strangers?
You know, would someone who yells at a robot barista be the same kind of person who would be rude to a customer service agent who was a human, right?
You know, I think there's a lot of diversity of opinion about this.
I think there are lots of arguments in psych and ethics and popular culture about how this is a domain of human experience that's unique.
Empathy is a unique domain of human experience, and people are deeply uncomfortable with the idea that a robot or a chatbot could take on empathy.
Does it run the risk that we outsource too much of our empathy to AI or chatbots in a way that removes our own ability to develop empathy?
Is it fair to the recipients of empathy if we let AI do too much of the work for us?
You know, I think that there are a lot of interesting, fascinating, open questions.
I think that, you know, you do see use cases, you do see studies showing that people who work with AI can sometimes improve their empathetic communication too.
And if we do take those findings and extrapolate them, we might wonder, you know, if there are possibilities for using these tools
to expand our sense of empathy, how do we decide whether that's morally okay or morally inappropriate?
It is a very interestingly polarized space to think about.