
Francois Chollet

649 total appearances


Podcast Appearances

Dwarkesh Podcast
Francois Chollet, Mike Knoop - LLMs won’t lead to AGI - $1,000,000 Prize to find true solution

And if you want to understand that, you can compare and contrast it with deep learning.

In deep learning, your model is a differentiable parametric curve. In program synthesis, your model is a discrete graph of operators. So you've got a set of logical operators, like a domain-specific language, and you're picking instances of it and structuring them into a graph that is a program. And that's actually very similar to a program you might write in Python or C++.
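The "discrete graph of operators" idea can be sketched as a toy DSL. The operators below are invented for illustration (they are not from the episode); a program is then a composition of operator instances, with a linear chain as the simplest possible graph:

```python
# Toy illustration (not an actual synthesis system's DSL):
# each operator is a discrete, non-differentiable function.

def reverse_list(xs):          # one operator in the DSL
    return xs[::-1]

def double_each(xs):           # another operator
    return [2 * x for x in xs]

def drop_first(xs):
    return xs[1:]

# The DSL is just the set of available operators.
DSL = [reverse_list, double_each, drop_first]

# A "program" is a graph of operator instances; the simplest
# graph is a linear chain, applied left to right.
def run_program(program, xs):
    for op in program:
        xs = op(xs)
    return xs

program = [drop_first, double_each, reverse_list]
print(run_program(program, [1, 2, 3, 4]))  # [8, 6, 4]
```

Unlike a neural network's weights, the choices here (which operators, in which order) are discrete, which is what makes the search problem combinatorial rather than gradient-based.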

And in deep learning, because we are doing machine learning here, we're trying to automatically learn these models, your learning engine is gradient descent. And gradient descent is very compute-efficient, because you have this very strong, informative feedback signal about where the solution is, so you can get to the solution very quickly.
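A minimal sketch of that feedback signal, assuming a one-parameter model y = w·x fit by gradient descent on squared error (data and learning rate are illustrative): at every step the gradient points toward the solution, which is what makes the process compute-efficient.

```python
# Minimal sketch: fit y = w * x to data with gradient descent.
# The gradient of the loss is a dense, informative feedback
# signal telling us which direction to move w at every step.

xs = [1.0, 2.0, 3.0, 4.0]
ys = [3.0, 6.0, 9.0, 12.0]    # generated by the "true" w = 3

w = 0.0                       # initial parameter
lr = 0.01                     # learning rate

for _ in range(500):
    # gradient of mean squared error with respect to w
    grad = sum(2 * (w * x - y) * x for x, y in zip(xs, ys)) / len(xs)
    w -= lr * grad            # step against the gradient

print(round(w, 3))  # converges to 3.0
```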

But it is very data-inefficient, meaning that in order to make it work, you need a dense sampling of the operating space, a dense sampling of the data distribution. And then you're limited to only generalizing within that data distribution. And the reason you have this limitation is that your model is a curve.
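A toy illustration of that limitation (not from the episode): fit a polynomial curve to densely sampled points on one interval, then query it inside and outside that interval. Interpolation within the sampled distribution works; extrapolation outside it breaks down.

```python
import numpy as np

# Toy illustration: fit a polynomial "curve" to dense samples
# of sin(x) on [0, 3], then evaluate inside and far outside
# the training interval.
train_x = np.linspace(0.0, 3.0, 200)      # dense sampling
train_y = np.sin(train_x)

coeffs = np.polyfit(train_x, train_y, deg=7)
model = np.poly1d(coeffs)

in_dist_err = abs(model(1.5) - np.sin(1.5))    # inside [0, 3]
out_dist_err = abs(model(9.0) - np.sin(9.0))   # far outside

print(in_dist_err < 1e-3)      # True: interpolation works
print(out_dist_err > 1.0)      # True: extrapolation breaks down
```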

Meanwhile, if you look at discrete program search, the learning engine is combinatorial search: you're just trying a bunch of programs until you find one that actually meets your spec. This process is extremely data-efficient. You can learn a generalizable program from just one or two examples, which is why it works so well on ARC, by the way.
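The search side can be sketched with a toy DSL (operators invented for illustration, not an actual ARC solver): the spec is a single input→output example, and we enumerate chains of operators, shortest first, until one satisfies it, then check that the found program generalizes to an unseen input.

```python
from itertools import product

# Toy combinatorial program search. The "spec" is one
# input -> output example; we try every chain of DSL
# operators until one reproduces it.

def reverse_list(xs): return xs[::-1]
def double_each(xs):  return [2 * x for x in xs]
def drop_first(xs):   return xs[1:]

DSL = [reverse_list, double_each, drop_first]

def run(program, xs):
    for op in program:
        xs = op(xs)
    return xs

def search(example_in, example_out, max_len=3):
    # Enumerate chains of operators, shortest first.
    for length in range(1, max_len + 1):
        for program in product(DSL, repeat=length):
            if run(program, example_in) == example_out:
                return program
    return None

# Learn from ONE example...
program = search([1, 2, 3], [6, 4])
print([op.__name__ for op in program])

# ...and the learned program generalizes to unseen inputs.
print(run(program, [5, 10, 20]))  # [40, 20]
```

The trade-off runs exactly opposite to gradient descent: there is no informative gradient (a program either fits the spec or it doesn't), so search cost explodes combinatorially with program length, but a single example can suffice to pin down a program that generalizes.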