Yann LeCun
Podcast Appearances
They might be able to give you a plan, but only under the condition that they've been trained to produce those kinds of plans, right? They're not going to be able to plan for situations that they've never encountered before. They're basically going to have to regurgitate the template that they've been trained on.
Certainly, an LLM would be able to solve that problem if you fine-tune it for it. I can't say that an LLM cannot do this. It can do this if you train it for it, there's no question, down to a certain level, where things can be formulated in terms of words. But if you want to go down to how you climb down the stairs or just stand up from your chair in terms of words, you can't do it.
That's one of the reasons you need experience of the physical world, which is much higher bandwidth than what you can express in words, in human language.
Sure. And a lot of the plans that people know about, that are relatively high level, are actually learned. Most people don't invent plans. We have some ability to do this, of course, but most plans that people use are plans that they've been trained on: they've seen other people use those plans, or they've been told how to do things.
You can't invent that from scratch. Take a person who's never heard of airplanes and ask them how you go from New York to Paris: they're probably not going to be able to deconstruct the whole plan unless they've seen examples of that before. So certainly LLMs are going to be able to do this.
But then how you link this to the low level of actions, that needs to be done with things like JEPA, which basically lifts the abstraction level of the representation without attempting to reconstruct every detail of the situation. That's why we need JEPAs for it.
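To make the joint-embedding idea concrete, here is a minimal PyTorch-style sketch of a JEPA-like training step. The encoder and predictor shapes, the two "views", and the stop-gradient on the target branch are illustrative assumptions, not Meta's I-JEPA or V-JEPA code; the point is only that the loss is computed between predicted and target embeddings, so the model never has to reconstruct every low-level detail.

```python
# Minimal JEPA-style sketch (illustrative assumptions throughout).
import torch
import torch.nn as nn
import torch.nn.functional as F

class Encoder(nn.Module):
    """Maps an observation to an abstract embedding (sizes are made up)."""
    def __init__(self, in_dim=784, emb_dim=128):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(in_dim, 256), nn.ReLU(), nn.Linear(256, emb_dim))
    def forward(self, x):
        return self.net(x)

class Predictor(nn.Module):
    """Predicts the target's embedding from the context's embedding."""
    def __init__(self, emb_dim=128):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(emb_dim, 256), nn.ReLU(), nn.Linear(256, emb_dim))
    def forward(self, z):
        return self.net(z)

context_enc = Encoder()
target_enc = Encoder()   # in practice often an EMA copy of the context encoder
predictor = Predictor()

def jepa_loss(context_view, target_view):
    z_ctx = context_enc(context_view)
    with torch.no_grad():                # target branch provides a fixed regression target
        z_tgt = target_enc(target_view)
    z_pred = predictor(z_ctx)
    # The loss lives in embedding space: no pixel-level reconstruction anywhere.
    return F.mse_loss(z_pred, z_tgt)

# Toy usage: two views (e.g. visible vs. hidden parts) of the same observation.
x = torch.randn(8, 784)
loss = jepa_loss(context_view=x, target_view=x + 0.1 * torch.randn_like(x))
loss.backward()
```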
No, there's one thing that autoregressive LLMs, or LLMs in general, not just the autoregressive ones but also the BERT-style bidirectional ones, are exploiting, and it's self-supervised learning. And I've been a very, very strong advocate of self-supervised learning for many years. So those things are an incredibly impressive demonstration that self-supervised learning actually works.
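The two flavors of self-supervised learning mentioned here can be sketched in a few lines. The toy snippet below contrasts the autoregressive next-token objective with a BERT-style masked-token objective; the vocabulary size, model dimensions, masking rate, and the MASK_ID value are all assumptions for illustration, not any particular model's settings. What both objectives share is that the training signal comes from the raw text itself, with no labels.

```python
# Illustrative sketch of the two self-supervised objectives (toy sizes, not real settings).
import torch
import torch.nn as nn
import torch.nn.functional as F

vocab, d_model = 1000, 64
embed = nn.Embedding(vocab, d_model)
head = nn.Linear(d_model, vocab)
layer = nn.TransformerEncoderLayer(d_model, nhead=4, batch_first=True)
backbone = nn.TransformerEncoder(layer, num_layers=2)

tokens = torch.randint(0, vocab, (2, 16))   # a fake batch of token ids

# (1) Autoregressive (GPT-style): predict token t+1 from tokens up to t, using a causal mask.
seq = tokens.size(1) - 1
causal = torch.triu(torch.full((seq, seq), float("-inf")), diagonal=1)
h_ar = backbone(embed(tokens[:, :-1]), mask=causal)
ar_loss = F.cross_entropy(head(h_ar).reshape(-1, vocab), tokens[:, 1:].reshape(-1))

# (2) BERT-style (bidirectional): mask some tokens, predict them from the full context.
MASK_ID = 0                                  # illustrative mask token id
mask = torch.rand(tokens.shape) < 0.15
mask[:, 0] = True                            # ensure at least one masked position in this toy
corrupted = tokens.masked_fill(mask, MASK_ID)
h_mlm = backbone(embed(corrupted))           # no causal mask: both directions are visible
mlm_loss = F.cross_entropy(head(h_mlm)[mask], tokens[mask])

# Both losses need only raw text, no human labels: that is the self-supervision.
print(float(ar_loss), float(mlm_loss))
```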