Ilya Sutskever
And I think that everyone will actually want that.
It's like the AI that's robustly aligned to care about sentient life specifically.
I think in particular, there's a case to be made that it will be easier to build an AI that cares about sentient life than an AI that cares about human life alone.
Because the AI itself will be sentient.
And if you think about things like mirror neurons and human empathy for animals, which you might argue is not big enough, but it exists.
I think it's an emergent property of the fact that we model others with the same circuit we use to model ourselves, because that's the most efficient thing to do.
It's true. It's possible, though it's not the best criterion. I'll say a few things.
Thing number one: I think there is merit to the idea of care for sentient life, and I think it should be considered.
Number two: I think it would be helpful if there were some kind of shortlist of ideas that companies, when they are in this situation, could use.
Number three: I think it would be really materially helpful if the power of the most powerful superintelligence were somehow capped, because that would address a lot of these concerns.
How to do it, I'm not sure, but I think it would be materially helpful when you're talking about really, really powerful systems.