Azeem Azhar
I mean, some people might say that the process of a biological system going through its own set of selection pressures, and then individual survival pressures, is a very, very particular path
that determines whether an organism or an agent is successful or not.
And then you might argue that, well, because silicon-based systems like these models have a different path, they will look different.
But they still have their process of rewards and reinforcement learning.
They still have a sense that
certain models end up not making it out there.
And what we are starting to see, persuasively to end users if perhaps not to the consciousness scientists, is models claiming through their outputs to have a sense of suffering, right?
To have a sense of ennui or boredom or fear.
When you package all those things together, how do we know that we're not on that trajectory to something that might actually meet your criteria for consciousness?
Yes.
And, you know, it's interesting.
It's so difficult to avoid because, in a way, the definition of consciousness is still contested among philosophers.
You know, we've got mutual friends, Anil Seth being one, I'm sure you know, David Chalmers as well.
And, you know, the best academics in the field are still
debating this, but it's such a helpful shorthand. Even in your response to me, you talked about these digital silicon beings, and "being" in a sense where I know exactly why and how you used that word, but it becomes so easy for it to slide its way into our vocabulary.
What I thought was really powerful about your August 2025 essay, Seemingly Conscious AI, was that you said, look, we can sidestep the
scientific or the philosophical definitions for the moment.
And we can focus on this idea of seemingly conscious AI because of the risks you identify.
And I think there's this fundamental idea that we've built our societies around a notion of consciousness, an idea of the ability to suffer, and the ladder of rights and responsibilities that go with it.
When we bring ourselves to where we are today, at the beginning of 2026, the way this is manifesting itself is what you call AI psychosis risk, right?