Yann LeCun

Speaker
1102 total appearances
Voice ID

Voice Profile Active

This person's voice can be automatically recognized across podcast episodes using AI voice matching.

Voice samples: 1
Confidence: Medium

Appearances Over Time

Podcast Appearances

Lex Fridman Podcast
#416 – Yann LeCun: Meta AI, Open Source, Limits of LLMs, AGI & the Future of AI

more resources to complex problems than to simple problems. And it's not going to be autoregressive prediction of tokens. It's going to be more something akin to inference of latent variables in what used to be called probabilistic models or graphical models and things of that type. So basically, the principle is like this. The prompt is like observed variables.
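A rough way to picture this framing (not from the episode itself): if the answer is a latent variable to be inferred from the observed prompt, inference can be an iterative refinement loop whose step count grows with problem difficulty, unlike fixed per-token autoregressive decoding. The function names and convergence test below are illustrative assumptions.

```python
# Illustrative only: iterative latent-variable inference naturally spends
# more compute on hard prompts (more refinement steps before convergence)
# than on easy ones.

def infer_latent(x, score, init_z, refine, tol=1e-3, max_steps=1000):
    """Refine a latent answer z for the observed prompt x until its score
    stops improving; harder prompts take more steps before stopping."""
    z, prev = init_z(x), float("inf")
    for step in range(max_steps):
        z = refine(x, z)            # one refinement step on the latent
        s = score(x, z)             # how well z fits the observed prompt
        if prev - s < tol:          # no more improvement: stop early
            break
        prev = s
    return z, step + 1              # step count scales with difficulty
```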

Lex Fridman Podcast
#416 – Yann LeCun: Meta AI, Open Source, Limits of LLMs, AGI & the Future of AI

And what the model does is that it can measure to what extent an answer is a good answer for a prompt. So think of it as some gigantic neural net, but it's got only one output. And that output is a scalar number, which is, let's say, zero if the answer is a good answer for the question, and a large number if the answer is not a good answer for the question. Imagine you had this model.
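Taken literally, such a model is a network whose only output is one number scoring a (prompt, answer) pair. A minimal PyTorch sketch, assuming both inputs are already fixed-size embeddings; the architecture and sizes are invented for illustration.

```python
import torch
import torch.nn as nn

class EnergyModel(nn.Module):
    """Toy energy-based scorer: maps a (prompt, answer) pair to one scalar.
    Output near zero ~ compatible pair, large output ~ incompatible pair."""
    def __init__(self, dim=256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(2 * dim, 512),
            nn.ReLU(),
            nn.Linear(512, 1),          # single scalar output: the "energy"
        )

    def forward(self, prompt_emb, answer_emb):
        pair = torch.cat([prompt_emb, answer_emb], dim=-1)
        return self.net(pair).squeeze(-1)
```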

Lex Fridman Podcast
#416 – Yann LeCun: Meta AI, Open Source, Limits of LLMs, AGI & the Future of AI

If you had such a model, you could use it to produce good answers. The way you would do is produce the prompt and then search through the space of possible answers for one that minimizes that number. That's called an energy-based model.
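One brute-force reading of "search through the space of possible answers": score a finite pool of candidate answers with the energy model and keep the lowest-energy one. The `energy_model` callable and the candidate pool are assumed to come from elsewhere.

```python
import torch

def best_answer(energy_model, prompt_emb, candidate_embs):
    """Pick, from a finite candidate set, the answer embedding the
    energy model scores lowest for this prompt (lower = better)."""
    with torch.no_grad():
        energies = torch.stack(
            [energy_model(prompt_emb, cand) for cand in candidate_embs]
        )
    return candidate_embs[int(energies.argmin())]
```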

Lex Fridman Podcast
#416 – Yann LeCun: Meta AI, Open Source, Limits of LLMs, AGI & the Future of AI

Well, so really what you need to do would be to not search over possible strings of text that minimize that energy. But what you would do is do this in abstract representation space. So in sort of the space of abstract thoughts, you would elaborate a thought, right, using this process of minimizing the output of your model, okay, which is just a scalar. It's an optimization process.
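Because that output is a single differentiable scalar, the "search" does not have to enumerate text strings: it can be gradient descent on a continuous answer representation. A hedged sketch; the step count, learning rate, and random initialization are arbitrary choices.

```python
import torch

def optimize_answer_repr(energy_model, prompt_emb, dim=256, steps=100, lr=0.1):
    """Inference as optimization in representation space: descend the
    scalar energy with respect to an abstract answer representation z."""
    z = torch.randn(dim, requires_grad=True)      # the abstract "thought"
    opt = torch.optim.SGD([z], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        energy = energy_model(prompt_emb, z)      # one scalar
        energy.backward()                         # gradient w.r.t. z
        opt.step()                                # move z downhill in energy
    return z.detach()
```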

Lex Fridman Podcast
#416 – Yann LeCun: Meta AI, Open Source, Limits of LLMs, AGI & the Future of AI

So now the way the system produces its answer is through optimization, by minimizing an objective function, basically. And we're talking about inference, we're not talking about training. The system has been trained already. So now we have an abstract representation of the thought of the answer, representation of the answer. We feed that to, basically, an autoregressive decoder,
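Read as a pipeline, with training already done: encode the prompt once, optimize only the answer representation against the frozen scorer, then hand the result to a decoder. All module names here are placeholders, and the frozen-weights handling assumes `nn.Module` components.

```python
import torch

def answer_by_optimization(encoder, energy_model, decoder, prompt_tokens,
                           dim=256, steps=100, lr=0.1):
    """Inference-time only: encode -> optimize the latent answer
    representation -> decode it. No weights are trained here."""
    for module in (encoder, energy_model, decoder):
        module.requires_grad_(False)              # the system is already trained
    prompt_emb = encoder(prompt_tokens)
    z = torch.randn(dim, requires_grad=True)      # representation of the answer
    opt = torch.optim.Adam([z], lr=lr)
    for _ in range(steps):                        # minimize the objective
        opt.zero_grad()
        energy_model(prompt_emb, z).backward()
        opt.step()
    return decoder(z.detach())                    # turn the "thought" into text
```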

Lex Fridman Podcast
#416 – Yann LeCun: Meta AI, Open Source, Limits of LLMs, AGI & the Future of AI

which can be very simple, that turns this into a text that expresses this thought. So that, in my opinion, is the blueprint of future dialogue systems. They will think about their answer, plan their answer by optimization before turning it into text. And that is Turing-complete.
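The decoder "can be very simple": a toy autoregressive decoder that starts from the optimized representation and emits tokens greedily might look like the following. The vocabulary size, the GRU cell, and the special token ids are all invented for this sketch.

```python
import torch
import torch.nn as nn

class TinyDecoder(nn.Module):
    """Toy autoregressive decoder: turns an abstract answer representation
    into a token sequence, one greedy token at a time."""
    def __init__(self, vocab_size=32000, dim=256):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, dim)
        self.cell = nn.GRUCell(dim, dim)
        self.out = nn.Linear(dim, vocab_size)

    def forward(self, z, max_len=64, bos_id=1, eos_id=2):
        h = z.unsqueeze(0)                        # hidden state seeded by the "thought"
        tok = torch.tensor([bos_id])
        tokens = []
        for _ in range(max_len):
            h = self.cell(self.embed(tok), h)     # one autoregressive step
            tok = self.out(h).argmax(-1)          # greedy next token
            if tok.item() == eos_id:
                break
            tokens.append(tok.item())
        return tokens
```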

Lex Fridman Podcast
#416 – Yann LeCun: Meta AI, Open Source, Limits of LLMs, AGI & the Future of AI

The space of representations. It goes abstract representation. Abstract representation. So you have an abstract representation inside the system. You have a prompt. The prompt goes through an encoder, produces a representation, perhaps goes through a predictor that predicts a representation of the answer, of the proper answer.
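This last description adds an encoder and a predictor in front: the prompt is encoded, and a predictor proposes a representation of the answer, which could then serve as a starting point for the optimization sketched earlier. A schematic version with invented modules; mean-pooling stands in for a real encoder.

```python
import torch
import torch.nn as nn

class PromptToAnswerRepr(nn.Module):
    """Schematic encoder + predictor: encode the prompt into an abstract
    representation, then predict a representation of the answer from it."""
    def __init__(self, vocab_size=32000, dim=256):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, dim)
        self.predictor = nn.Sequential(nn.Linear(dim, dim), nn.ReLU(),
                                       nn.Linear(dim, dim))

    def forward(self, prompt_tokens):
        prompt_repr = self.embed(prompt_tokens).mean(dim=0)  # crude pooled encoder
        return self.predictor(prompt_repr)                   # predicted answer repr
```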
