
Yann LeCun

👤 Speaker
1102 total appearances
Voice ID

Voice Profile Active

This person's voice can be automatically recognized across podcast episodes using AI voice matching.

Voice samples: 1
Confidence: Medium

Appearances Over Time

Podcast Appearances

Lex Fridman Podcast
#416 – Yann Lecun: Meta AI, Open Source, Limits of LLMs, AGI & the Future of AI

What that means is that every time you produce a token, the probability that you stay within the set of correct answers decreases, and it decreases exponentially.

Lex Fridman Podcast
#416 – Yann Lecun: Meta AI, Open Source, Limits of LLMs, AGI & the Future of AI

Yeah. And that drift is exponential. It's like errors accumulate, right? So the probability that an answer would be nonsensical increases exponentially with the number of tokens.
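
A back-of-the-envelope version of this claim, assuming (purely for illustration, not stated in the episode) a constant, independent per-token probability e of stepping outside the set of acceptable continuations:

```latex
\[
  P(\text{answer still correct after } n \text{ tokens})
    = (1 - e)^{n}
    = e^{\,n \ln(1 - e)}
    \approx e^{-en}
\]
% Exponential decay in answer length. For example, with e = 0.01:
%   n = 10:    0.99^{10}   \approx 0.90
%   n = 100:   0.99^{100}  \approx 0.37
%   n = 1000:  0.99^{1000} \approx 4 \times 10^{-5}
```

The independence of per-token errors is the simplifying assumption here; it is what makes the decay exactly exponential.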

Lex Fridman Podcast
#416 – Yann Lecun: Meta AI, Open Source, Limits of LLMs, AGI & the Future of AI

No, it's basically a struggle against the curse of dimensionality. So the way you can correct for this is that you fine-tune the system by having it produce answers for all kinds of questions that people might come up with. And people are people, so a lot of the questions that they have are very similar to each other, so you can probably cover 80% or whatever.

Lex Fridman Podcast
#416 – Yann Lecun: Meta AI, Open Source, Limits of LLMs, AGI & the Future of AI

of questions that people will ask by collecting data. And then you fine-tune the system to produce good answers for all of those things. And it's probably going to be able to learn that because it's got a lot of capacity to learn. But then there is... you know, the enormous set of prompts that you have not covered during training. And that set is enormous.
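
A toy sketch of the "80% or whatever" intuition, under the purely illustrative assumption that question types follow a Zipf distribution (the episode specifies no such model): the head of the distribution is cheap to cover with fine-tuning data, while the long tail stays uncovered.

```python
import numpy as np

# Illustrative assumption: question types are Zipf-distributed (s = 1.1).
n_types = 1_000_000                  # distinct question types in the wild
ranks = np.arange(1, n_types + 1)
mass = ranks ** -1.1
mass /= mass.sum()                   # normalize to a probability distribution

covered = 10_000                     # question types collected for fine-tuning
print(f"covered {covered / n_types:.1%} of question types, "
      f"but {mass[:covered].sum():.1%} of question traffic")
# -> under these numbers, roughly 1% of question types accounts for ~80%
#    of traffic: the head is easy to fine-tune on; the tail is what remains.
```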

Lex Fridman Podcast
#416 – Yann Lecun: Meta AI, Open Source, Limits of LLMs, AGI & the Future of AI

Like within the set of all possible prompts, the proportion of prompts that have been used for training is absolutely tiny. It's a tiny, tiny, tiny subset of all possible prompts. And so the system will behave properly on the prompts it has been trained on, whether pre-trained or fine-tuned.

Lex Fridman Podcast
#416 – Yann Lecun: Meta AI, Open Source, Limits of LLMs, AGI & the Future of AI

But then there is an entire space of things that it cannot possibly have been trained on because the number is gigantic. So whatever training the system has been subject to, to produce appropriate answers, you can break it by finding a prompt that will be outside of the set of prompts it's been trained on, or things that are similar, and then it will just spew complete nonsense.
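
The arithmetic behind "the number is gigantic", with assumed round numbers (the vocabulary size, prompt length, and training-set size below are all illustrative, not from the episode):

```python
import math

vocab = 100_000                            # token vocabulary size (assumed)
length = 50                                # a modest prompt length (assumed)
log10_space = length * math.log10(vocab)   # log10 of all possible prompts
log10_seen = 13                            # ~10 trillion training prompts (generous)

print(f"possible {length}-token prompts: 10^{log10_space:.0f}")
print(f"fraction seen in training:      10^{log10_seen - log10_space:.0f}")
# -> about 10^250 possible prompts; even ten trillion training examples
#    cover a ~10^-237 fraction of that space, i.e. effectively none of it.
```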

Lex Fridman Podcast
#416 – Yann Lecun: Meta AI, Open Source, Limits of LLMs, AGI & the Future of AI

I mean, people have come up with things where you put essentially a random sequence of characters in a prompt, and that's enough to kind of throw the system into a mode where it's going to answer something completely different than it would have answered without this. So that's a way to jailbreak the system, basically go outside of its conditioning, right?
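
A toy analogue of "going outside of its conditioning" (this is not how real adversarial prompts work mechanically, just an illustration of undefined behavior off the training distribution): a character bigram model only has a learned next-character distribution for contexts it saw during training; an unseen character leaves its output unconstrained.

```python
from collections import Counter, defaultdict

# Train a character bigram model on a tiny corpus.
corpus = "the cat sat on the mat and the dog sat on the log"
counts = defaultdict(Counter)
for a, b in zip(corpus, corpus[1:]):
    counts[a][b] += 1

def next_char_dist(context):
    """Learned next-character distribution, or None if never seen in training."""
    c = counts.get(context)
    if not c:
        return None              # off-distribution: nothing constrains the output
    total = sum(c.values())
    return {ch: n / total for ch, n in c.items()}

print(next_char_dist("t"))       # in-distribution: a sensible learned distribution
print(next_char_dist("~"))       # unseen "random" character: None, behavior undefined
```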
