Max Tegmark
So, for example, this is just a moonshot, but, you know, learning...
It's very much the same thing as search.
If you're trying to train a neural network to do something really well, you have some loss function, you have a bunch of knobs you can turn, represented by a bunch of numbers, and you're trying to tweak them so that it becomes as good as possible at this thing.
So if you think of a landscape with some valley...
where each dimension of the landscape corresponds to some number you can change.
You're trying to find the minimum.
And it's well known that if you have a very high-dimensional, complicated landscape, it's super hard to find the minimum, right?
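The knob-turning picture here is, in machine learning terms, gradient descent on a loss landscape. A minimal sketch, where the landscape, step size, and starting point are invented purely for illustration:

```python
import numpy as np

def gradient_descent(grad, x0, lr=0.1, steps=100):
    """Tweak the 'knobs' x to walk downhill on the loss landscape."""
    x = np.array(x0, dtype=float)
    for _ in range(steps):
        x = x - lr * grad(x)  # step against the gradient, i.e. downhill
    return x

# Toy landscape: a simple bowl f(x) = sum(x**2), whose gradient is 2x.
# Each dimension of x is one 'knob' we can turn.
grad = lambda x: 2 * x
minimum = gradient_descent(grad, x0=[3.0, -4.0])  # converges toward [0, 0]
```

On a bowl like this, descent reaches the single minimum easily; the point of the passage is that real loss landscapes have vastly more dimensions and many competing valleys.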
Quantum mechanics is amazingly good at this.
If I want to know what's the lowest energy state this water can possibly have, that's incredibly hard to compute, but nature will happily figure it out for you if you just cool it down, make it very, very cold.
If you put a ball somewhere, it'll roll down to its minimum.
And this happens, metaphorically, in the energy landscape too.
And quantum mechanics even uses some clever tricks, which today's machine learning systems don't.
Like if you're trying to find the minimum and you get stuck in a little local minimum here, in quantum mechanics, you can actually tunnel through the barrier and get unstuck again.
So maybe, for example, we'll one day use quantum computers to help train neural networks better.
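A classical analogue of that "tunnel through the barrier" trick is simulated annealing, which escapes local minima with random jumps that shrink as the system "cools", much like the cold-water example. This is a hedged sketch, not a quantum algorithm; the one-dimensional landscape is invented for illustration:

```python
import math
import random

def anneal(f, x0, temp=2.0, cooling=0.99, steps=5000, seed=0):
    """Minimize f, escaping local minima via random jumps at high temperature."""
    rng = random.Random(seed)
    x, fx = x0, f(x0)
    best, fbest = x, fx
    for _ in range(steps):
        cand = x + rng.gauss(0, temp)  # propose a jump; bigger when 'hot'
        fc = f(cand)
        # Always accept downhill moves; accept uphill ones with
        # probability exp(-delta/T) -- the classical 'barrier hop'.
        if fc < fx or rng.random() < math.exp((fx - fc) / temp):
            x, fx = cand, fc
            if fx < fbest:
                best, fbest = x, fx
        temp *= cooling  # cool down, so the walk eventually settles
    return best

# Landscape with a local minimum near x = 1.9 and the global one near x = -2.1.
f = lambda x: x**4 - 8 * x**2 + 3 * x
best = anneal(f, x0=1.9)  # starts stuck in the local valley, hops out
```

Plain gradient descent started at x = 1.9 would stay trapped in the shallower valley; the high-temperature jumps let the search cross the barrier, which is the role tunneling plays in the quantum version of the story.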
I mean, there's a very wide range of guesses, as you know, among AGI researchers about when we're going to get AGI.
Some people, like our friend Rodney Brooks, say it's going to be hundreds of years at least.
And then there are many others who think it's going to happen much sooner.
In recent polls, maybe half or so of AI researchers think we're going to get AGI within decades.