Dwarkesh Podcast
Francois Chollet, Mike Knoop - LLMs won’t lead to AGI - $1,000,000 Prize to find true solution
I'm not saying it's not going to happen.
ARC is not a perfect benchmark.
Maybe it has flaws.
Maybe it could be hacked in that way.
What would make me change my mind about that is basically if I start seeing a critical mass of cases where you show the model something it has not seen before, a task that's actually novel from the perspective of its training data, and it can actually adapt on the fly.
And this is true for LLMs, but really this would catch my attention for any AI technique out there.
If I can see the ability to adapt to novelty on the fly, to pick up new skills efficiently, then I would be extremely interested.
I would think this is on the path to AGI.
Right.
You're asking basically what's the difference between actual intelligence, which is the ability to adapt to things you've not been prepared for, and pure memorization, like reciting what you've seen before.
And it's not just some semantic difference.
The big difference is that you can never be pre-trained on everything that you might see at test time, right?
Because the world changes all the time.
So it's not just the fact that the space of possible tasks is infinite, and that even if you're trained on millions of them, you've still seen zero percent of the total space.
It's also the fact that the world is changing every day, right?
This is why we, the human species, have developed intelligence in the first place.
If there were such a thing as a fixed distribution for the world, for the universe, for our lives, then we would not need intelligence at all.
In fact, many creatures, many insects, for instance, do not have intelligence.