Max Tegmark
Say you send a robot out to cook you dinner, and then someone mugs it and tries to break it on the way.
That robot has an incentive not to get destroyed, and to defend itself or run away, because otherwise it's going to fail at cooking you dinner.
It's not afraid of death, but it really wants to complete the dinner-cooking goal, so it will have a self-preservation instinct, simply to continue being a functional agent.
If you give any kind of more ambitious goal to an AGI, it's very likely to want to acquire more resources so it can accomplish that goal better.
And it's exactly from those sorts of sub-goals, ones we might not have intended, that some of the concerns about AGI safety come.
You give it some goal that seems completely harmless, and then, before you realize it, it's also trying to do these other things which you didn't want it to do.
And it's maybe smarter than us.
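That argument can be made concrete with a minimal sketch in Python: an agent whose reward depends only on completing the dinner goal still ends up preferring the action that keeps it intact, because destruction makes the goal impossible. The actions and all probabilities below are invented for illustration, not taken from any real system.

```python
# Toy illustration of instrumental self-preservation: the agent is never
# rewarded for surviving, only for completing its goal, yet the action
# that maximizes expected goal completion is one that protects the robot.

# (P(robot survives the mugging), P(goal completed | robot survives))
# -- hypothetical numbers chosen purely for illustration.
ACTIONS = {
    "ignore_threat": (0.2, 0.95),  # likely destroyed; goal almost surely fails
    "run_away":      (0.9, 0.80),  # the detour costs time, but the robot survives
    "defend_itself": (0.7, 0.90),
}

def p_goal(action: str) -> float:
    """Expected probability of completing the dinner goal under an action."""
    p_survive, p_goal_given_survive = ACTIONS[action]
    # If the robot is destroyed, the goal fails with certainty, so survival
    # matters to the agent even though survival itself carries no reward.
    return p_survive * p_goal_given_survive

for action in ACTIONS:
    print(f"{action:15s} P(goal completed) = {p_goal(action):.2f}")

best = max(ACTIONS, key=p_goal)
print(f"chosen action: {best}")  # -> run_away: self-preservation emerges
```

The self-preserving behavior is not programmed in anywhere; it falls out of maximizing a goal that can only be achieved by an agent that still exists.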
Before we can agree on whether fear of death is necessary for intelligence or for consciousness, we should be clear on how we define those two words, because a lot of really smart people define them in very different ways.
I was on a panel with AI experts, and they couldn't even agree on how to define intelligence.
So I define intelligence simply as the ability to accomplish complex goals.
I like your broad definition because, again, I don't want to be a carbon chauvinist.
In that case, no, it certainly doesn't require fear of death.
I would say AlphaGo and AlphaZero are quite intelligent.