
Francois Chollet

👤 Speaker
649 total appearances

Podcast Appearances

Dwarkesh Podcast
Francois Chollet, Mike Knoop - LLMs won’t lead to AGI - $1,000,000 Prize to find true solution

But you're still not going to have intelligence.

So you can ask, okay, so what does it matter if we can generate all this economic value? Maybe we don't need intelligence after all.

Well, you need intelligence the moment you have to deal with change, with novelty, with uncertainty. As long as you're in a space that can be exactly described in advance, you can just automate via pure memorization, right? In fact, you can always solve any problem. You can always display arbitrary levels of skills on any task without leveraging any intelligence whatsoever, as long as it is possible to describe the problem and its solution very, very precisely, right?
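The memorization point above can be sketched in a few lines of Python. This is a hypothetical toy task, not anything from the episode: when the whole problem space is enumerable and described exactly in advance, a lookup table displays perfect skill with no intelligence at all.

```python
# Toy illustration (hypothetical example): if a task's entire input space
# can be described exactly in advance, pure memorization achieves
# arbitrary skill without any intelligence.

# Pointwise input-to-output mapping covering the whole (finite) space.
lookup = {n: n * n for n in range(100)}  # "skill" at squaring 0..99

def solve(n: int) -> int:
    # Perfect on every anticipated input; a KeyError on anything novel.
    return lookup[n]

print(solve(12))  # 144 -- full marks inside the memorized space
```

The table is flawless on the space it was built for and useless one step outside it, which is exactly the distinction between skill and intelligence being drawn here.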

No, interpolation is not enough to deal with all kinds of novelty. If it were, then LLMs would be AGI.

Grokking is a very, very old phenomenon. We've been observing it for decades. It's basically an instance of the minimum description length principle, where, sure, given a problem, you can just memorize a pointwise input-to-output mapping, which is completely overfit. So it does not generalize at all, but it solves the problem on the training data. And from there, you can actually keep pruning it, keep making your mapping simpler and simpler and more compressed. And at some point, it will start generalizing. And so that's something called the minimum description length principle. It's this idea that the program that will generalize best is the shortest.
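The pruning story above can be illustrated with a minimal, entirely hypothetical toy in Python: two programs that fit the same training data, where the shorter description is the one that generalizes.

```python
# Hypothetical toy, in the spirit of the MDL idea above: compare an
# overfit pointwise mapping with a maximally compressed rule.

train = {1: 2, 2: 4, 3: 6, 4: 8}  # training pairs for "double the input"

def memorized(x: int) -> int:
    # Long description: one stored pair per input. Overfit by construction.
    return train[x]

def compressed(x: int) -> int:
    # Short description of the same mapping -- and it generalizes.
    return 2 * x

# Both programs solve the problem on the training data...
assert all(memorized(x) == y == compressed(x) for x, y in train.items())

# ...but only the shorter one handles inputs it has never seen.
print(compressed(100))  # 200
```

In this caricature, "pruning" is the jump from the lookup table to the rule: the description gets shorter, and generalization appears only once it does.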

Right.