Francois Chollet
But you're still not going to have intelligence.
So you can ask, okay, what does it matter if we can generate all this economic value?
Maybe we don't need intelligence after all.
Well, you need intelligence the moment you have to deal with change, with novelty, with uncertainty.
As long as you're in a space that can be exactly described in advance, you can just...
You can just automate via pure memorization, right?
In fact, you can always solve any problem.
You can always display arbitrary levels of skill on any task without leveraging any intelligence whatsoever, as long as it is possible to describe the problem and its solution very, very precisely, right?
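A minimal sketch of that point, using a hypothetical toy task (squaring small integers): if the problem space can be enumerated in advance, a plain lookup table displays perfect skill with no intelligence at all, and it breaks the moment anything novel appears.

```python
# Hypothetical illustration: a fully describable task "solved" by pure memorization.
lookup = {n: n * n for n in range(100)}  # the problem and its solution, described exactly

def solve(n):
    # Perfect skill inside the pre-described space, zero intelligence involved.
    return lookup[n]

print(solve(7))  # 49
# But any novelty breaks it: solve(100) raises KeyError,
# because the memorized mapping cannot deal with change.
```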
No, interpolation is not enough to deal with all kinds of novelty.
If it were, then LLMs would be AGI.
Grokking is a very, very old phenomenon.
We've been observing it for decades.
It's basically an instance of the minimum description length principle, where, sure, given a problem, you can just memorize a pointwise input-to-output mapping, which is completely overfit.
So it does not generalize at all, but it solves the problem on the training data.
And from there, you can actually keep pruning it, keep making your mapping simpler and simpler and more compressed.
And at some point, it will start generalizing.
And so that's something called the minimum description length principle.
It's this idea that the program that will generalize best is the shortest.
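The idea above can be sketched with a hypothetical toy example (learning the parity of an integer): a pointwise memorized mapping and a much shorter program both fit the training data exactly, but only the compressed description generalizes beyond it.

```python
# Hypothetical sketch of the minimum description length idea.
train = {n: n % 2 for n in range(10)}  # training set: parity of n

memorized = dict(train)                # overfit: one stored entry per example
compressed = lambda n: n % 2           # a far shorter description of the same mapping

# Both fit the training data perfectly...
assert all(memorized[n] == compressed(n) for n in train)

# ...but only the shorter program handles novel inputs.
print(compressed(12345))   # 1
print(12345 in memorized)  # False: the memorized mapping has no answer here
```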
Right.