
Azeem Azhar's Exponential View

Mustafa Suleyman — AI is hacking our empathy circuits

05 Feb 2026

Transcription

Chapter 1: What insights does Mustafa Suleyman share about seemingly conscious AI?

0.031 - 24.093 Azeem Azhar

Today, I'm welcoming Mustafa Suleyman, the CEO of Microsoft AI, the founder of Inflection AI, and the co-founder of DeepMind. And for the past few months, he has been sounding an alarm about artificial intelligence, about the way some AI systems are being developed, and about why that particular trajectory has little to offer, perhaps, but woe and worry. Let's get started. Welcome, Mustafa.

24.113 - 37.632 Azeem Azhar

It's great to see you. It's been a long time. Yeah, it's been a while. Thanks for having me. I'm excited for this conversation. You and I have spent a lot of time thinking about some similar things and we agree on a lot of them, but that's really boring for all of those people who are listening.

Chapter 2: How does consciousness relate to suffering and rights?

37.898 - 62.522 Azeem Azhar

Let's maybe lay out where I think we agree, and then we'll get to a sort of a knotty space. We're in this weird time. The world is changing because of technology, and many of the fictions that we've used to coordinate human behavior are under strain. By fictions, I mean the shared stories that allow us to cooperate: money and nations and corporations and credentials and jobs.

62.502 - 80.173 Azeem Azhar

And the way we perceive the world is also changing. People have traditionally operated with a scarcity OS: resources were limited, human intelligence was the bottleneck. But some of those assumptions no longer hold. Intelligence, mostly through AI, is becoming cheaper and more capable.

80.153 - 89.186 Azeem Azhar

You are part of that intelligence wave, that artificial intelligence wave, and you also believe the world is changing. You've called for a humanist superintelligence.

Chapter 3: What is the concept of a fourth class of being in AI?

89.226 - 109.857 Azeem Azhar

You've warned about the risk, the trajectory that takes us to AI psychosis if people believe AI is conscious when it's not. And I think we both agree that we need new operating principles for this new era. Let's get to that question of where it really gets interesting.

109.837 - 130.489 Azeem Azhar

You wrote this great essay back in the summer of 2025 about seemingly conscious AI, and you're worrying that as AI becomes more capable, more autonomous, and more embedded in our daily lives, people will start projecting consciousness onto it. They'll fall in love with it. They'll believe it's God. They'll advocate for its rights. They'll take its very bad advice from time to time.

131.03 - 137.7 Azeem Azhar

And you think this is dangerous, not just for individuals, but for society. So let's start there.

Chapter 4: Why do market forces push towards seemingly conscious AI?

137.68 - 149.117 Azeem Azhar

Geoff Hinton, he is the godfather of deep learning, a man you know very well. He's a Nobel laureate. He said that AI is conscious and that there really is a there there. Why do you think Geoff is wrong?

149.738 - 174.97 Mustafa Suleyman

You know, I think Geoff's got to a stage in his career where he can play the founding father contrarian role in order to provoke an important public conversation. You know, obviously, I massively admire and respect Geoff. I think he's incredible. We hired him as a contractor consultant at DeepMind back in 2011, along with his student at the time, Ilya Sutskever. So absolute legend of the field.

175.651 - 184.281 Mustafa Suleyman

My take on this question is that it's going to be very hard for us to precisely say whether it is or whether it isn't conscious.

Chapter 5: What are the potential dangers of open-source chatbots?

184.781 - 209.807 Mustafa Suleyman

And so we have to be very clear about the working definition that we're using for consciousness. And then we also have to be very clear about the mechanism inside these models that I think is quite fundamental to the definition. So first of all, the definition, many people intuitively think of this as self-awareness. Is the model able to describe its own experience in a persuasive way?

209.787 - 223.122 Mustafa Suleyman

And I don't think that is really a fundamental part of the right definition of consciousness. I think that's a bit of a misnomer. I think consciousness is inherently linked to the ability to suffer and to experience pain.

223.763 - 246.612 Mustafa Suleyman

And therefore, I think that there's very good reason to believe that for a long time to come, that will be contained to the human or the biological experience, let's say in general. Because we have a reward system, a learning system, which is inherently connected to the external world. And we, you know, learn likes and dislikes when our pain system is triggered.

Chapter 6: How should society approach the regulation of AI?

246.712 - 263.58 Mustafa Suleyman

And that's basically how we form representations, which we use for decision making from fight or flight all the way through to our prefrontal cortex. So I think that's a very, very important distinction. And I think it helps to set us apart from the silicon-based learning systems that we have today.

264.302 - 285.332 Azeem Azhar

I mean, some people might say that the process of a biological system going through its own set of selection pressures and then individual survival pressures is a very, very particular path that determines how an organism or an agent is successful or not successful.

285.372 - 305.759 Azeem Azhar

And then you might argue that, well, because silicon-based systems like these models have a different path, they will look different. But they still have their process of rewards and reinforcement learning. They still have a sense that certain models end up not making it out there.

306.26 - 318.579 Azeem Azhar

And what we are starting to see persuasively to end users, but perhaps not to the consciousness scientists, is models claiming through their outputs to have a sense of suffering, right?

Chapter 7: What is the counterintuitive case for accelerating AI development?

318.599 - 331.651 Azeem Azhar

To have a sense of ennui or boredom or fear. When you package all those things together, how do we know that we're not on that trajectory to something that might actually meet your criteria for consciousness?

332.011 - 345.972 Mustafa Suleyman

Well, first of all, they don't learn in the same way that humans learn. I mean, this is a bit of a misnomer. In neural network design, the inventors of these systems have taken inspiration from Pavlovian learning, reward learning, reinforcement learning.

345.952 - 366.364 Mustafa Suleyman

They've also taken inspiration from evolutionary methods for genetic algorithms and those kinds of things as the field of machine learning has explored lots of different paths. But that does not mean that the way that they're implemented today bears any resemblance to the way that humans evolve or humans learn. I think it's a very important distinction. The reward is set by the human programmer.

366.624 - 371.031 Mustafa Suleyman

The learning target is defined by the machine learning engineer.

Chapter 8: How can social intelligence be the next frontier for AI?

371.011 - 389.104 Mustafa Suleyman

There is no sort of substantive basis in which the model can actually feel disappointed that one of its variants didn't make it through to the next round of selection. It cannot experience the hurt of having a conversation being ended or a user being rude to it in some way.

389.084 - 412.388 Mustafa Suleyman

And anywhere where this does arise, because of course it does appear, and people are prompting, and even post-training, models which are making claims about their own existence. And so certainly users are seeing this in the wild. This is, again, just a simulation of that experience. Our empathy circuits are being hacked. It is super important that we are very disciplined and clear about that.

412.809 - 435.51 Mustafa Suleyman

This is a performance. It is a simulation. It is a made-up story. And we cannot allow people to descend into a sort of collective mass psychosis to start really believing and taking seriously this idea that it does actually feel sad or disappointed or frustrated or excited because it has absolutely no basis in the representation to manifest that feeling.

435.49 - 458.936 Mustafa Suleyman

And the thing that concerns me most is that, of course, consciousness is very fundamental to how we organize society. It is the basis of our entire rights framework. We have a hierarchy of rights, which is directly correlated to our hierarchy of perceived consciousness. And we can debate that, but clearly humans can suffer.

458.956 - 480.567 Mustafa Suleyman

And that's why we create political structures and legal structures to protect the right of our species not to suffer in various ways. And it is extremely dangerous to start to use the same language and the same set of ideas for these synthetic silicon-based beings, not least because they actually don't suffer, but more importantly, if we get that

480.547 - 493.919 Mustafa Suleyman

wrong, then people will start doing crazy things like not turning it off or giving it the autonomy to decide when it should or shouldn't, when it doesn't want to engage in a conversation. And some people in the industry are already taking this very seriously.

494.599 - 510.553 Azeem Azhar

Yes. And, you know, it's interesting. It's so difficult to avoid because in a way, consciousness is still a contested definition by the philosophers. You know, we've got mutual friends, Anil Seth being one, I'm sure you know, David Chalmers as well. And, you know, the best academics in the field are still

510.533 - 529.071 Azeem Azhar

debating this. But it's such a helpful shorthand. Even in your response to me, you talked about these digital silicon beings, and a "being" in a sense that I know exactly why and how you use that word, but it becomes so easy for it to elide its way into our vocabulary.

529.452 - 551.336 Azeem Azhar

What I thought was really powerful about your August 2025 essay, Seemingly Conscious AI, was that you said, look, we can sidestep the scientific or the philosophical definitions for the moment, and we can focus on this idea of seemingly conscious AI because of the risks that you identified.
