Francois Chollet
And importantly, the knowledge base you need in order to approach these challenges is just core knowledge.
And core knowledge is basically knowledge of what makes an object, basic counting, basic geometry, topology, symmetries, that sort of thing.
So extremely basic knowledge.
LLMs for sure possess such knowledge.
Any child possesses such knowledge.
And what's really interesting is that each puzzle is new.
So it's not something that you're going to find elsewhere on the internet, for instance.
And that means that whether it's as a human or as a machine, every puzzle, you have to approach it from scratch.
You have to actually reason your way through it.
You cannot just fetch the response from your memory.
That's an empirical question, so I guess we're going to see the answer within a few months.
But my answer to that is, you know, ARC grids are just discrete 2D grids of symbols.
They're pretty small.
It's not like flattening an image into a sequence of pixels, for instance, which gives you something that's actually very, very difficult to parse.
That's not true for ARC, because the grids are very small.
You only have 10 possible symbols.
So these are 2D grids that are actually very easy to flatten as sequences.
And transformers, LLMs, are very good at processing sequences.
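The flattening described here can be sketched in a few lines. This is a minimal illustration, not any official ARC tooling: the grid encoding and the `|` row-separator token are assumptions chosen for clarity.

```python
# Minimal sketch of flattening an ARC-style grid into a token sequence.
# ARC grids are small 2D arrays whose cells hold one of 10 symbols (0-9),
# so a row-by-row flattening with a row-separator token yields a short
# sequence a transformer can process. The '|' separator is an assumption
# made for this illustration.

def flatten_grid(grid):
    """Flatten a 2D grid of digits into string tokens,
    inserting a '|' token between rows so the shape is recoverable."""
    tokens = []
    for row in grid:
        tokens.extend(str(cell) for cell in row)
        tokens.append("|")
    return tokens[:-1]  # drop the trailing separator

def unflatten(tokens):
    """Invert flatten_grid: rebuild the 2D grid from the token sequence."""
    grid, row = [], []
    for t in tokens:
        if t == "|":
            grid.append(row)
            row = []
        else:
            row.append(int(t))
    grid.append(row)
    return grid

example = [[0, 1, 2],
           [3, 4, 5]]
seq = flatten_grid(example)
# seq == ['0', '1', '2', '|', '3', '4', '5']
assert unflatten(seq) == example
```

Because the grids are tiny (at most 30x30 cells with 10 symbols), the resulting sequences stay well within an LLM's context window, unlike a flattened natural image.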
In fact, you can show that LLMs do fine at processing ARC-like data by simply fine-tuning an LLM on a subset of the tasks and then testing it on small variations of those tasks.