
Francois Chollet

👤 Speaker
649 total appearances

Appearances Over Time

Podcast Appearances

Dwarkesh Podcast
Francois Chollet, Mike Knoop - LLMs won’t lead to AGI - $1,000,000 Prize to find true solution

And so far, LLMs have not been doing very well on it.

In fact, the approaches that are working well are more towards discrete program search, program synthesis.
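
To make the "discrete program search" idea concrete, here is a minimal sketch, not anything from the episode or from a real ARC entry: the tiny DSL, the function names, and the depth limit are all illustrative assumptions. It enumerates compositions of a few grid primitives and keeps the first program that reproduces every training pair of an ARC-style task.

```python
from itertools import product

# Hypothetical mini-DSL of grid primitives; real ARC solvers use far richer DSLs.
def identity(grid):        return grid
def flip_horizontal(grid): return [row[::-1] for row in grid]
def flip_vertical(grid):   return grid[::-1]
def rotate_90(grid):       return [list(row) for row in zip(*grid[::-1])]

PRIMITIVES = [identity, flip_horizontal, flip_vertical, rotate_90]

def search_program(train_pairs, max_depth=3):
    """Brute-force discrete program search: try compositions of primitives,
    shortest first, and return the first one that explains every training pair."""
    for depth in range(1, max_depth + 1):
        for ops in product(PRIMITIVES, repeat=depth):
            def program(grid, ops=ops):
                for op in ops:
                    grid = op(grid)
                return grid
            if all(program(x) == y for x, y in train_pairs):
                return program
    return None

# Toy ARC-style task whose hidden rule is "rotate the grid 90 degrees clockwise".
train = [([[1, 0], [0, 0]], [[0, 1], [0, 0]]),
         ([[1, 2], [3, 4]], [[3, 1], [4, 2]])]
program = search_program(train)
print(program([[5, 6], [7, 8]]))  # -> [[7, 5], [8, 6]]
```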

Right.

I'm pretty skeptical that we're going to see an LLM do 80% in a year.

That said, if we do see it, you would also have to look at how this was achieved.

If you just train the model on millions or billions of puzzles similar to ARC, so that you're relying on there being some overlap between the tasks that you train on and the tasks that you're going to see at test time, then you're still using memorization, right?
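
A minimal sketch of the memorization strategy being described, assuming a hypothetical lookup-based solver (nothing here comes from the episode or from any real ARC submission): it reuses the transformation of a previously seen puzzle, so it only succeeds when the test task overlaps with its training data.

```python
# Hypothetical lookup-based solver illustrating "solving by memorization":
# it only answers test puzzles that overlap with puzzles it was trained on.

def grid_signature(grid):
    """Crude similarity key: grid shape plus the multiset of cell values."""
    return (len(grid), len(grid[0]), tuple(sorted(v for row in grid for v in row)))

class MemorizingSolver:
    def __init__(self):
        self.seen = {}  # signature of a training input -> transformation that solved it

    def train(self, puzzles):
        for task_input, transformation in puzzles:
            self.seen[grid_signature(task_input)] = transformation

    def solve(self, task_input):
        # Works only when an effectively identical task was in the training data.
        transformation = self.seen.get(grid_signature(task_input))
        return transformation(task_input) if transformation else None

flip = lambda grid: [row[::-1] for row in grid]
solver = MemorizingSolver()
solver.train([([[1, 0], [0, 2]], flip)])
print(solver.solve([[1, 0], [0, 2]]))  # -> [[0, 1], [2, 0]]   (seen in training)
print(solver.solve([[9, 9, 9]]))       # -> None               (genuinely novel task)
```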

And maybe it can work, you know. Hopefully, ARC is going to be good enough that it's going to be resistant to this sort of attempt at brute-forcing.

But you never know. Maybe it could happen. I'm not saying it's not going to happen. ARC is not a perfect benchmark. Maybe it has flaws. Maybe it could be hacked in that way.

What would make me change my mind about that is basically if I start seeing a critical mass of cases where you show the model something it has not seen before, a task that's actually novel from the perspective of its training data, something that's not in its training data, and it can actually adapt on the fly.

And this is true for LLMs, but really this would catch my attention for any AI technique out there.

If I can see the ability to adapt to novelty on the fly, to pick up new skills efficiently, then I would be extremely interested.

I would think this is on the path to AGI.

Right.

You're asking basically what's the difference between actual intelligence, which is the ability to adapt to things you've not been prepared for, and pure memorization, like reciting what you've seen before.
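
One way to picture that distinction, as a hedged sketch rather than the ARC Prize's actual protocol: score a solver only on tasks whose underlying rule never appears in its training set, so a pure memorizer earns nothing and any points must come from adapting on the fly. The `train`/`solve` interface, the task dictionaries, and the `rule` field are assumptions for illustration.

```python
# Hedged sketch of an evaluation that rewards adaptation rather than recall
# (an assumption for illustration, not the ARC Prize's actual protocol):
# the solver is scored only on tasks whose underlying rule never appears
# in its training set, so memorized solutions cannot earn any points.

def evaluate_adaptation(solver, train_tasks, test_tasks):
    """Train on one set of rules, then measure accuracy on disjoint rules only."""
    train_rules = {task["rule"] for task in train_tasks}
    novel_tasks = [task for task in test_tasks if task["rule"] not in train_rules]

    solver.train(train_tasks)
    solved = sum(
        1
        for task in novel_tasks
        if solver.solve(task["input"]) == task["expected_output"]
    )
    return solved / len(novel_tasks) if novel_tasks else 0.0

# `solver` is any object exposing train()/solve(); the task dicts and the
# "rule" field are hypothetical. A pure memorizer scores near zero here,
# while a system that picks up new skills on the fly can score well.
```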