Berber Jin
And given that this is a Silicon Valley creation, people have been dreaming or fantasizing about a world in which machines one day become our peers almost, or like they're no longer the things that we command, but they exist autonomously.
So there are all these philosophical questions that get raised in how to create a chatbot that interacts with humans, particularly around questions of morality and ethics.
So I don't think you'll find this in job listings for companies, but she had a unique path in that she was very close to Anthropic's co-founders.
She was with them at OpenAI.
She went with them to Anthropic in 2021 when they started the company.
And she just became really interested in this question around the ethics of Claude, which is the company's chatbot.
She would spend a lot of time just talking to Claude and trying to understand its behavior, how it responds to certain questions, why it responds in certain ways to certain questions.
And she just became so interested in those kind of more philosophical questions that the company gave her this role of being an in-house philosopher and entrusted her to basically help design Claude's character.
So this was a really interesting part of the conversation because Amanda leaves open the possibility that Claude could have some form of a conscience.
And she's a philosopher by training, and she can definitely explain her reasoning a lot better than I can.
But essentially, it boils down to this idea that there are a lot of attributes of chatbots that, in her view, almost mimic the way that we behave and feel in the world.
She would disagree with the idea that chatbots don't have feelings.
And it's really interesting because that informs how she designed Claude, the chatbot.
If you ask Claude questions like, do you have a conscience?
Do you have a soul?
It gives a winding philosophical response that leaves open the possibility that it might.
And that's very different from other chatbots, at least the ones I've asked that same question.
What Amanda would say is that it's not the right approach to just put up very strict guardrails and be overly conservative in designing how chatbots respond to sensitive questions.