
Yann LeCun

👤 Speaker
See mentions of this person in podcasts
1102 total appearances
Voice ID

Voice Profile Active

This person's voice can be automatically recognized across podcast episodes using AI voice matching.

Voice samples: 1
Confidence: Medium

Appearances Over Time [chart]

Podcast Appearances

Lex Fridman Podcast
#416 – Yann LeCun: Meta AI, Open Source, Limits of LLMs, AGI & the Future of AI

What's had the transformational effect is human feedback. There are many ways to use it, and some of it is just purely supervised, actually. It's not really reinforcement learning.

Lex Fridman Podcast
#416 – Yann LeCun: Meta AI, Open Source, Limits of LLMs, AGI & the Future of AI

It's the HF. And then there are various ways to use human feedback, right? So you can ask humans to rate answers, multiple answers that are produced by a world model. And then what you do is you train an objective function to predict that rating. And then you can use that objective function to predict whether an answer is good.
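What LeCun describes here is, in effect, reward-model training. Below is a minimal sketch in PyTorch, assuming answers have already been encoded as fixed-size vectors; every name (RewardModel, answer_embs, human_ratings) is an illustrative stand-in, not anything from the episode.

# Hypothetical sketch: train a small network ("objective function") to
# predict human ratings of answers. Assumes each answer is already encoded
# as a fixed-size embedding.
import torch
import torch.nn as nn

class RewardModel(nn.Module):
    """Maps an answer embedding to a scalar predicted rating."""
    def __init__(self, dim: int = 768):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(dim, 256), nn.ReLU(), nn.Linear(256, 1))

    def forward(self, answer_emb: torch.Tensor) -> torch.Tensor:
        return self.net(answer_emb).squeeze(-1)

# Toy data: stand-ins for encoded answers and annotator scores in [0, 1].
answer_embs = torch.randn(64, 768)
human_ratings = torch.rand(64)

model = RewardModel()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

for step in range(100):
    opt.zero_grad()
    pred = model(answer_embs)            # predicted rating per answer
    loss = loss_fn(pred, human_ratings)  # regress onto the human ratings
    loss.backward()
    opt.step()

In practice, RLHF reward models are usually trained on pairwise preferences (which of two answers a human preferred) rather than raw scalar scores, but the regression above is the simplest instance of "train an objective function to predict that rating."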

Lex Fridman Podcast
#416 – Yann LeCun: Meta AI, Open Source, Limits of LLMs, AGI & the Future of AI

And you can backpropagate gradients through this to fine-tune your system so that it only produces highly rated answers. That's one way. In RL, that means training what's called a reward model: basically, a small neural net that estimates to what extent an answer is good.
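The "backpropagate gradients through this" step only works where the system's output is differentiable with respect to its parameters. Here is a toy sketch of the idea with a frozen reward model and a generator that emits continuous answer embeddings; real token-by-token decoding is discrete, which is why production RLHF typically falls back on RL algorithms such as PPO instead. All names are hypothetical.

# Toy illustration of backpropagating through a frozen objective:
# the reward model scores the generator's output, and the gradient of that
# score flows back into the generator's parameters only.
import torch
import torch.nn as nn

reward_model = nn.Sequential(nn.Linear(768, 256), nn.ReLU(), nn.Linear(256, 1))
for p in reward_model.parameters():
    p.requires_grad_(False)              # the objective stays fixed

generator = nn.Linear(32, 768)           # stand-in for the system being tuned
opt = torch.optim.Adam(generator.parameters(), lr=1e-3)

for step in range(100):
    opt.zero_grad()
    prompt = torch.randn(16, 32)         # stand-in for encoded prompts
    answer_emb = generator(prompt)       # continuous "answer" (differentiable)
    reward = reward_model(answer_emb).mean()
    (-reward).backward()                 # gradient ascent on the reward
    opt.step()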

Lex Fridman Podcast
#416 – Yann LeCun: Meta AI, Open Source, Limits of LLMs, AGI & the Future of AI

It's very similar to the objective I was talking about earlier for planning, except now it's not used for planning, it's used for fine-tuning your system. I think it would be much more efficient to use it for planning, but currently it's used to fine-tune the parameters of the system. Now, there are several ways to do this. Some of them are supervised.
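A crude version of using the objective for planning, as LeCun suggests, is inference-time search rather than any parameter update: sample several candidate answers, score each with the learned objective, and return the best. A sketch under the same hypothetical names as above:

# Hypothetical best-of-N "planning": consult the learned objective at query
# time instead of baking it into the weights via fine-tuning.
import torch

def best_of_n(prompt_emb: torch.Tensor, generator, reward_model, n: int = 8):
    # Input jitter stands in for sampling diversity in a real generator.
    candidates = [generator(prompt_emb + 0.1 * torch.randn_like(prompt_emb))
                  for _ in range(n)]
    scores = torch.stack([reward_model(c).sum() for c in candidates])
    return candidates[int(scores.argmax())]  # highest-rated candidate

# e.g.: best = best_of_n(torch.randn(1, 32), generator, reward_model)

The distinction he draws is exactly this: fine-tuning bakes the objective into the parameters once, while planning consults it afresh on every query.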

Lex Fridman Podcast
#416 – Yann LeCun: Meta AI, Open Source, Limits of LLMs, AGI & the Future of AI

You just ask a person: what is a good answer for this? Then you just type the answer. I mean, there are lots of ways that those systems are being adjusted.
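That is the purely supervised route: a human writes the desired answer, and the model is fine-tuned with an ordinary next-token cross-entropy loss to reproduce it. A minimal toy sketch, with an illustrative vocabulary and model rather than a real LLM:

# Toy supervised fine-tuning step: teach the model to reproduce a
# human-written answer token by token.
import torch
import torch.nn as nn

vocab, dim = 1000, 64
model = nn.Sequential(nn.Embedding(vocab, dim), nn.Linear(dim, vocab))
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

# Stand-ins for (prompt + human-written answer) token sequences.
tokens = torch.randint(0, vocab, (8, 33))   # batch of 8 sequences
inputs, targets = tokens[:, :-1], tokens[:, 1:]

logits = model(inputs)                      # next-token predictions
loss = loss_fn(logits.reshape(-1, vocab), targets.reshape(-1))
loss.backward()
opt.step()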

Lex Fridman Podcast
#416 – Yann LeCun: Meta AI, Open Source, Limits of LLMs, AGI & the Future of AI

I actually made that comment on just about every social network I can, and I've made that point multiple times in various forums. Here's my point of view on this. People can complain that AI systems are biased, and they generally are biased by the distribution of the training data that they've been using.

Lex Fridman Podcast
#416 – Yann LeCun: Meta AI, Open Source, Limits of LLMs, AGI & the Future of AI

…trained on, that reflects biases in society and that is potentially offensive to some people, or potentially not. And some techniques to de-bias then become offensive to some people because of historical incorrectness and things like that. And so you can ask two questions. The first question is: is it possible to produce an AI system that is not biased?
