François Chollet
Everyone knows what a kaleidoscope is, right?
It's like this cardboard tube with a few bits of colored glass in it.
These few bits of original information get mirrored and repeated and transformed, and they create this tremendous richness of complex patterns.
It's beautiful.
The kaleidoscope hypothesis is this idea that the world in general and any domain in particular follows the same structure, that it appears on the surface to be extremely rich and complex,
and infinitely novel with every passing moment.
But in reality, it is made from the repetition and composition of just a few atoms of meaning.
A big part of intelligence is the process of mining your experience of the world to identify the bits that are repeated, and to extract them, these unique atoms of meaning.
When we extract them, we call them abstractions.
Sure.
So ARC is intended as a kind of IQ test for machine intelligence.
And what makes it different from most LLM benchmarks out there is that it's designed to be resistant to memorization.
So if you look at the way LLMs work, they're basically this big interpolative memory.
And the way you scale up their capabilities is by trying to cram as much knowledge and patterns as possible into them.
And by contrast, ARC does not require a lot of knowledge at all.
It's designed to only require what's known as core knowledge, which is basic knowledge about things like elementary physics, objectness, counting, that sort of thing.
The sort of knowledge that any four-year-old or five-year-old possesses, right?
But what's interesting is that each puzzle in ARC is novel, something you've probably not encountered before, even if you've memorized the entire internet.
And that's what makes ARC challenging for LLMs.
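To make the structure of an ARC task concrete, here is a minimal sketch in Python following the public ARC dataset's JSON layout: a task has "train" pairs that demonstrate a hidden rule and "test" inputs the solver must transform; grids are 2D lists of integers 0-9 representing colors. The toy rule and the `solve` function below are hypothetical, chosen only for illustration; real ARC tasks use larger grids and far less obvious transformations.

```python
# A toy ARC-style task in the dataset's JSON shape. The (assumed, illustrative)
# hidden rule here: mirror each row left-to-right.
task = {
    "train": [
        {"input": [[0, 1], [1, 0]], "output": [[1, 0], [0, 1]]},
        {"input": [[2, 0], [0, 2]], "output": [[0, 2], [2, 0]]},
    ],
    "test": [
        {"input": [[3, 3], [0, 3]]},  # solver must produce the output grid
    ],
}

def solve(grid):
    # Hypothetical solver for this toy task: reverse every row.
    return [row[::-1] for row in grid]

# A candidate rule is only plausible if it reproduces all training pairs.
for pair in task["train"]:
    assert solve(pair["input"]) == pair["output"]
```

The point of the format is that memorization does not help: each task's rule must be induced fresh from just a few demonstration pairs, using only core-knowledge priors like objectness, symmetry, and counting.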