
Yann LeCun

👤 Speaker
1102 total appearances
Voice ID

Voice Profile Active

This person's voice can be automatically recognized across podcast episodes using AI voice matching.

Voice samples: 1
Confidence: Medium

Appearances Over Time

Podcast Appearances

Lex Fridman Podcast
#416 – Yann LeCun: Meta AI, Open Source, Limits of LLMs, AGI & the Future of AI

Or figure out a reaction you want to cause and then figure out how to say it so that it causes that reaction. But that's really close to language. But think about a mathematical concept, or imagining something you want to build out of wood, or something like this, right? The kind of thinking you're doing has absolutely nothing to do with language, really.

Lex Fridman Podcast
#416 – Yann LeCun: Meta AI, Open Source, Limits of LLMs, AGI & the Future of AI

It's not like you have necessarily an internal monologue in any particular language. You're imagining mental models of the thing, right? If I ask you to imagine what this water bottle will look like if I rotate it 90 degrees, that has nothing to do with language.

Lex Fridman Podcast
#416 – Yann LeCun: Meta AI, Open Source, Limits of LLMs, AGI & the Future of AI

And so clearly there is a more abstract level of representation in which we do most of our thinking, and we plan what we're gonna say if the output is, you know, uttered words, as opposed to an output being, you know, muscle actions, right? We plan our answer before we produce it. And LLMs don't do that. They just produce one word after the other, instinctively, if you want.

Lex Fridman Podcast
#416 – Yann LeCun: Meta AI, Open Source, Limits of LLMs, AGI & the Future of AI

It's a bit like, you know, subconscious actions. Like, you're distracted, you're doing something, you're completely concentrated, and someone comes to you and asks you a question, and you kind of answer the question. You don't have time to think about the answer, but the answer is easy, so you don't need to pay attention. You sort of respond automatically.

Lex Fridman Podcast
#416 – Yann LeCun: Meta AI, Open Source, Limits of LLMs, AGI & the Future of AI

That's kind of what an LLM does, right? It doesn't think about its answer, really. It retrieves it because it's accumulated a lot of knowledge, so it can retrieve some things, but it's going to just spit out one token after the other without planning the answer.
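The "spit out one token after the other" point describes autoregressive decoding. Here is a minimal illustrative sketch of that loop in Python, assuming a hypothetical next_token_distribution function that stands in for a trained model's forward pass (it is not a real library API): each step commits to a single next token given what has been generated so far, with no explicit plan for the rest of the answer.

```python
# Toy sketch of greedy autoregressive decoding (illustration only).
# `next_token_distribution` is a hypothetical stand-in for a language
# model's forward pass; a real model would compute these probabilities.
from typing import Dict, List

def next_token_distribution(context: List[str]) -> Dict[str, float]:
    """Return a probability for each candidate next token given the context."""
    table = {
        ("the",): {"cat": 0.6, "dog": 0.4},
        ("the", "cat"): {"sat": 0.7, "ran": 0.3},
        ("the", "cat", "sat"): {"<eos>": 1.0},
    }
    return table.get(tuple(context), {"<eos>": 1.0})

def generate(prompt: List[str], max_tokens: int = 10) -> List[str]:
    """Greedy decoding: append the single most probable next token, repeat.
    There is no lookahead and no plan for the whole answer."""
    tokens = list(prompt)
    for _ in range(max_tokens):
        dist = next_token_distribution(tokens)
        token = max(dist, key=dist.get)  # commit to one token at a time
        if token == "<eos>":
            break
        tokens.append(token)
    return tokens

print(generate(["the"]))  # ['the', 'cat', 'sat']
```

The contrast drawn in the episode is with planning: an agent that planned would settle on its whole answer, or an abstract representation of it, before emitting the first token, rather than committing one token at a time.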

Lex Fridman Podcast
#416 – Yann LeCun: Meta AI, Open Source, Limits of LLMs, AGI & the Future of AI

Okay, but then that assumes that those systems actually possess an internal world model.

Lex Fridman Podcast
#416 – Yann LeCun: Meta AI, Open Source, Limits of LLMs, AGI & the Future of AI

Yeah. So can you build this, first of all, by prediction? Right. And the answer is probably yes. Can you build it by predicting words? And the answer is most probably no, because language is very poor: it's weak, or low bandwidth, if you want. There's just not enough information there.
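The "low bandwidth" claim can be made concrete with a back-of-the-envelope comparison of how many bytes reach an LLM through text versus how many reach a young child through vision. The figures in the sketch below are rough assumptions chosen for illustration, not measurements, but they convey the shape of the argument.

```python
# Back-of-the-envelope comparison (all figures are rough assumptions,
# chosen for illustration; none are measurements).
TOKENS_OF_TEXT = 2e13            # assumed size of an LLM's training corpus
BYTES_PER_TOKEN = 2              # assumed average bytes per token

WAKING_HOURS_BY_AGE_4 = 16_000   # assumed waking hours of a young child
VISUAL_BYTES_PER_SECOND = 2e7    # assumed ~20 MB/s reaching the visual system

text_bytes = TOKENS_OF_TEXT * BYTES_PER_TOKEN
visual_bytes = WAKING_HOURS_BY_AGE_4 * 3600 * VISUAL_BYTES_PER_SECOND

print(f"text data:   ~{text_bytes:.0e} bytes")    # ~4e+13 bytes
print(f"visual data: ~{visual_bytes:.0e} bytes")  # ~1e+15 bytes
print(f"ratio:       ~{visual_bytes / text_bytes:.0f}x more through vision")
```

Under these assumed numbers, vision delivers tens of times more raw data than the entire text corpus, which is the sense in which language alone is a weak, low-bandwidth channel for learning a world model.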
