Yann LeCun
Well, it depends what kind of thinking, right? If it's just, if it's like producing puns, I'm much better in French than in English at that.
There is an abstract representation of imagining the reaction of a reader to that text.
Or you figure out a reaction you want to cause and then figure out how to say it so that it causes that reaction. But that's really close to language. Now think about a mathematical concept, or about imagining something you want to build out of wood, something like this, right? The kind of thinking you're doing there has absolutely nothing to do with language, really.
It's not like you have necessarily an internal monologue in any particular language. You're imagining mental models of the thing, right? If I ask you to imagine what this water bottle will look like if I rotate it 90 degrees, that has nothing to do with language.
And so clearly there is a more abstract level of representation in which we do most of our thinking, and in which we plan what we're going to say, whether the output is uttered words or muscle actions, right? We plan our answer before we produce it. And LLMs don't do that. They just produce one word after the other, instinctively, if you want.
It's a bit like subconscious actions: you're distracted, doing something you're completely concentrated on, and someone comes to you and asks you a question, and you kind of answer the question. You don't have time to think about the answer, but the answer is easy, so you don't need to pay attention. You sort of respond automatically.
That's kind of what an LLM does, right? It doesn't think about its answer, really. It retrieves it because it's accumulated a lot of knowledge, so it can retrieve some things, but it's going to just spit out one token after the other without planning the answer.
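The token-by-token process described here can be sketched as a simple loop. This is a toy illustration, not a real language model: the "model" is a hypothetical hard-coded next-token table standing in for an LLM's learned distribution, and the point is only that each token is chosen from the context so far, with no downstream plan.

```python
# Toy sketch of autoregressive generation: each token is emitted
# one step at a time, conditioned only on what came before.
# NEXT_TOKEN is a made-up lookup table playing the role of a model.

NEXT_TOKEN = {
    "<start>": "the",
    "the": "sky",
    "sky": "is",
    "is": "blue",
    "blue": "<end>",
}

def generate(max_tokens=10):
    tokens = ["<start>"]
    for _ in range(max_tokens):
        # The next token depends only on the tokens emitted so far;
        # nothing later in the answer is planned before this choice.
        nxt = NEXT_TOKEN.get(tokens[-1], "<end>")
        if nxt == "<end>":
            break
        tokens.append(nxt)
    return " ".join(tokens[1:])

print(generate())  # -> the sky is blue
```

A real LLM replaces the lookup table with a neural network that scores every possible next token, but the control flow is the same greedy, stepwise loop: generate, append, repeat.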
Okay, but then that assumes that those systems actually possess an internal world model.