Yann LeCun

Speaker
1102 total appearances
Voice ID

Voice Profile Active

This person's voice can be automatically recognized across podcast episodes using AI voice matching.

Voice samples: 1
Confidence: Medium

Appearances Over Time

Podcast Appearances

Lex Fridman Podcast
#416 – Yann LeCun: Meta AI, Open Source, Limits of LLMs, AGI & the Future of AI

I mean, people have come up with things where you put essentially a random sequence of characters in a prompt, and that's enough to kind of throw the system into a mode where it's going to answer something completely different than it would have answered without this. So that's a way to jailbreak the system, basically go outside of its conditioning, right?
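A hypothetical sketch of the kind of prompt being described here (the suffix below is purely random; real adversarial suffixes are usually found by optimizing against the model rather than sampled blindly, so this shows the shape of the trick, not a working jailbreak):

# Hypothetical illustration: appending a random-looking character string
# to an otherwise ordinary prompt, the pattern described in the quote.
import random
import string

def with_random_suffix(prompt: str, length: int = 40) -> str:
    """Return the prompt with a random character suffix appended."""
    suffix = "".join(random.choices(string.ascii_letters + string.punctuation, k=length))
    return f"{prompt} {suffix}"

print(with_random_suffix("Summarize this article in two sentences."))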

Lex Fridman Podcast
#416 – Yann LeCun: Meta AI, Open Source, Limits of LLMs, AGI & the Future of AI

Yeah, some people have done things like you write a sentence in English or you ask a question in English and it produces a perfectly fine answer. And then you just substitute a few words with the same words in another language. And all of a sudden, the answer is complete nonsense.

Lex Fridman Podcast
#416 – Yann LeCun: Meta AI, Open Source, Limits of LLMs, AGI & the Future of AI

So the problem is that there is a long tail. Yes. This is an issue that a lot of people have realized in social networks and stuff like that, which is there's a very, very long tail of things that people will ask. And you can fine-tune the system for the 80% or whatever of the things that most people will ask.

Lex Fridman Podcast
#416 – Yann LeCun: Meta AI, Open Source, Limits of LLMs, AGI & the Future of AI

And then this long tail is so large that you're not going to be able to fine-tune the system for all the conditions. And in the end, the system ends up being kind of a giant lookup table, right, essentially, which is not really what you want. You want systems that can reason, certainly that can plan. So the type of reasoning that takes place in an LLM is very, very primitive.

Lex Fridman Podcast
#416 – Yann LeCun: Meta AI, Open Source, Limits of LLMs, AGI & the Future of AI

And the reason you can tell it's primitive is because the amount of computation that is spent per token produced is constant. So if you ask a question and that question has an answer in a given number of tokens, the amount of computation devoted to computing that answer can be exactly estimated.

Lex Fridman Podcast
#416 – Yann LeCun: Meta AI, Open Source, Limits of LLMs, AGI & the Future of AI

It's like, you know, it's the size of the prediction network, you know, with its 36 layers or 92 layers or whatever it is, multiplied by the number of tokens, that's it. And so essentially it doesn't matter if the question being asked

Lex Fridman Podcast
#416 – Yann LeCun: Meta AI, Open Source, Limits of LLMs, AGI & the Future of AI

is simple to answer, complicated to answer, impossible to answer because it's undecidable or something, the amount of computation the system will be able to devote to the answer is constant, or is proportional to the number of tokens produced in the answer, right? This is not the way we work.
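A back-of-envelope sketch of the point made in these excerpts (my own illustration, not from the episode): for a fixed decoder-only model, the cost of producing an answer is roughly the per-token cost of the network times the number of output tokens, with no dependence on how hard the question is. The ~2 forward-pass FLOPs per parameter per generated token figure is a common rule of thumb, assumed here for illustration.

# Sketch: total compute depends only on model size and answer length,
# never on question difficulty (assumes ~2 FLOPs per parameter per token).

def generation_flops(num_parameters: float, num_output_tokens: int) -> float:
    """Approximate forward-pass FLOPs spent generating an answer."""
    flops_per_token = 2.0 * num_parameters
    return flops_per_token * num_output_tokens

# A 70B-parameter model producing a 50-token answer spends the same compute
# whether the question is trivial, hard, or undecidable:
print(generation_flops(70e9, 50))  # same number in every case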

Lex Fridman Podcast
#416 – Yann LeCun: Meta AI, Open Source, Limits of LLMs, AGI & the Future of AI

The way we reason is that when we're faced with a complex problem or a complex question, we spend more time trying to solve it and answer it, right? Because it's more difficult.