Max Tegmark
It wasn't because we were evil rhino haters as a whole.
It was just because our goals weren't aligned with those of the rhinoceros and it didn't work out so well for the rhinoceros because we were more intelligent.
So I think it's just so important that if we ever do build AGI, before we unleash anything, we have to make sure that it learns to understand our goals, that it adopts our goals, and that it retains those goals.
We really have to give it our best.
And it's difficult for two separate reasons.
There's the technical value alignment problem of figuring out just how to make machines understand our goals, adopt them, and retain them.
And then there's the separate part of it, the philosophical part: whose values, anyway?
And since it's not like we have any great consensus on this planet on values, what mechanism should we create then to aggregate and decide, okay, what's a good compromise?
That second discussion can't just be left to tech nerds like myself, right?
And if we refuse to talk about it,
and then AGI gets built, who's going to be actually making the decision about whose values?
It's going to be a bunch of dudes in some tech company. Are they so representative of all of humankind that we want to just entrust it to them?
Or are they even uniquely qualified to speak to future human happiness just because they're good at programming AI?