Mustafa Suleyman
It sounds really simplistic and obvious, but it's a good place to start.
I mean, to me, that's what a humanist superintelligence is.
What does it mean to have an AI that is truly aligned to human interests and isn't trying to exceed or escape human control?
One that is always on our team, in our corner, fighting for our interests, doing our work.
And so, you know, it's not necessarily a tool because a tool does exactly what you want it to do.
And there's a kind of one-to-one connection between your control and its kind of execution.
But the tool metaphor is still a good one: even though these things have some autonomy, or will have their own agency to some extent, they still have to fundamentally be accountable to us.
And the companies that make them need to be liable for the consequences, you know, for humanity, because that's really what's at stake.
Well, I think the first thing that we have to establish is that from a technical perspective, they might have the hallmarks of consciousness, but they don't have the experience of consciousness, right?
So the inner workings, like what are we trying to protect?
They don't have a pain network, right?
And this is sort of what I've been arguing with the model welfare advocates, because I think it's just such a misrepresentation of how the inner workings of these things actually operate.
I think it's super dangerous.
So the question of their entitlement to protection doesn't even come up.
Because they don't have suffering, they don't have guilt or shame or fear or anxiety or indeed hunger, sex drive, replication drive, or any of those other preferences.
Those preferences come from our evolved biological networks.
We have grown up from amoebas, and our survival-of-the-fittest function is what has driven which parts of us survive and which parts don't.
And that's taken hundreds of millions of years to get us to this point.
And that's why we've got this