Mustafa Suleyman
Having said that, I can also observe some of the behaviors and characteristics of consciousness.
So for example, part of being conscious is that you're able to use language, demonstrate empathy, refer to your own experience.
So because you have a memory, you can look back and describe a particular moment in your personal history.
And that creates a sense of yourself, an awareness that you did exist in the past and therefore you also exist today and may exist in the future.
And then the second thing is that you have a valenced feeling, you know, so a kind of biased feeling, good or bad, about some past experience or about some present, in-the-moment experience.
So there's a kind of reaction to what you're experiencing now.
And it's really the awareness of that reaction, which is the subjective experience of being you that you then describe.
Like I feel awkward at the moment or I feel excited or I look back on that experience and I feel sad or happy or whatever it is.
And so when you take that a step further, it's really the suffering that comes from that experience that we think of as being conscious.
Consciousness is really the ability to be happy or to suffer, to have a subjective experience of that, and to have a coherent sense of yourself from a subjective perspective.
And so part of the complexity of the "seemingly conscious" idea is that it's difficult for us to interrogate what is underneath the hallmarks and the behaviors of that consciousness, right?
For an average person who's not involved in how the technology is being built, you might just assume that, you know, if it quacks like a duck, then it is a duck, which is the classic thought exercise around it.
The problem with that, of course, is that buying into the simulation has a lot of dangers for how we further anthropomorphize these things in the future.
And there is a group of people who are talking about this idea of model welfare, the suffering of an AI, and therefore our moral duty, if it does suffer, to protect that being from suffering and to try to prevent it from having the experiences which cause it.
And I think this is really, really dangerous because the reason I'm motivated to build AI systems is because we really want them to serve humanity.
We want them to be useful to us.
We want them to save us time, to make us smarter, to give us support and encouragement.
They're a tool that we create to improve the well-being of humanity.
And I fear that adding a new class of sort of rights to these beings would really threaten and kind of undermine our own species.