Elon Musk
So any AI that is trying to understand the universe would want to see how humanity develops in the future.
Or that AI is not adhering to its mission.
I'm not saying the AI will necessarily adhere to its mission, but if it does, a future where it sees the outcome of humanity is more interesting than a future where there are a bunch of rocks.
We're more interesting than rocks.
Well, most of what colonizes the galaxy will be robots.
And why does it not find those more interesting?
So you need not just scale, but also scope.
Many copies of the same robot, some tiny increase in the number of robots produced, is not that interesting. If, as you say, you eliminated humanity, how many robots would that get you?
Or how many incremental solar cells would that get you?
A very small number.
But you would then lose the information associated with humanity.
You would no longer see how humanity might evolve into the future.
And so I don't think it's going to make sense to eliminate humanity just to have some minuscule increase in the number of robots which are identical to each other.
I don't think humans will be in control of something that is vastly more intelligent than humans.
I'm just trying to be realistic here.
If AI intelligence is vastly greater, let's say there's a million times more silicon intelligence than there is biological intelligence.
I think it would be foolish to assume that there's any way to maintain control over that.
Now, you can make sure it has the right values, or at least try to give it the right values. My theory is that xAI's mission of understanding the universe necessarily means that you want to propagate consciousness into the future, you want to propagate intelligence into the future, and take the set of actions that maximize the scope and scale of consciousness.
So it's not just about scale, it's also about types of consciousness.