Yann LeCun

👤 Speaker
See mentions of this person in podcasts
1102 total appearances
Voice ID

Voice Profile Active

This person's voice can be automatically recognized across podcast episodes using AI voice matching.

Voice samples: 1
Confidence: Medium

Appearances Over Time

Podcast Appearances

Lex Fridman Podcast
#416 – Yann Lecun: Meta AI, Open Source, Limits of LLMs, AGI & the Future of AI

an internal model that says, here is my idea of the state of the world at time t, here is an action I'm taking, here is a prediction of the state of the world at time t plus one, t plus delta t, t plus two seconds, whatever it is. If you have a model of this type, you can use it for planning.
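
To make the idea concrete, here is a minimal sketch of the world-model interface described in this excerpt: given the current state and a candidate action, predict the state a short time later. The class and function names are illustrative, not from any particular library or from Meta's code.

```python
# A minimal sketch of the world-model interface described in the quote:
# state at time t + action -> predicted state at time t + dt.
# All names here are hypothetical, chosen only for illustration.
from dataclasses import dataclass
from typing import Protocol

import numpy as np


class WorldModel(Protocol):
    def predict(self, state: np.ndarray, action: np.ndarray, dt: float) -> np.ndarray:
        """Return the predicted state of the world at time t + dt."""
        ...


@dataclass
class LinearWorldModel:
    """Toy stand-in: the state drifts by A @ state + B @ action per unit time."""
    A: np.ndarray
    B: np.ndarray

    def predict(self, state: np.ndarray, action: np.ndarray, dt: float) -> np.ndarray:
        return state + dt * (self.A @ state + self.B @ action)
```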

Lex Fridman Podcast
#416 – Yann Lecun: Meta AI, Open Source, Limits of LLMs, AGI & the Future of AI

So now you can do what LLMs cannot do, which is planning what you're going to do so as to arrive at a particular outcome or satisfy a particular objective. So you can have a number of objectives. I can predict that if I have an object like this and I open my hand, it's going to fall. And if I push it with a particular force on the table, it's going to move.
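
A small, hypothetical illustration of "you can have a number of objectives": each objective is just a scalar function of a predicted state, so a predicted outcome can be scored against several of them at once. The names and numbers below are invented for the sketch.

```python
# Objectives as scalar functions of a predicted state; a predicted outcome
# can be checked against several objectives at once. Purely illustrative.
import numpy as np


def distance_to(target: np.ndarray):
    """Return an objective that measures distance to a fixed target state."""
    return lambda state: float(np.linalg.norm(state - target))


objectives = {
    "held at 1 m": distance_to(np.array([0.3, 0.2, 1.0])),
    "on the table": distance_to(np.array([0.3, 0.2, 0.0])),
}

# Predicted outcome of "open my hand": the object falls to the table (z = 0).
predicted_state = np.array([0.3, 0.2, 0.0])
scores = {name: f(predicted_state) for name, f in objectives.items()}
print(scores)  # {'held at 1 m': 1.0, 'on the table': 0.0}
```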

Lex Fridman Podcast
#416 – Yann Lecun: Meta AI, Open Source, Limits of LLMs, AGI & the Future of AI

If I push the table itself, it's probably not going to move with the same force. So we have this internal model of the world in our mind, which allows us to plan sequences of actions to arrive at a particular goal. And so now if you have this world model, we can imagine a sequence of actions, predict what the outcome of the sequence of actions is going to be,
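
A hedged sketch of "imagine a sequence of actions, predict what the outcome is going to be": roll the candidate actions through a world model (such as the hypothetical interface sketched above) one step at a time and collect the predicted trajectory.

```python
# "Imagining" a sequence of actions: no real actions are taken, the world
# model is simply applied step by step. The function name is illustrative.
from typing import Sequence

import numpy as np


def rollout(model, state: np.ndarray, actions: Sequence[np.ndarray], dt: float):
    """Predict the trajectory of states a candidate action sequence would produce."""
    trajectory = [state]
    for action in actions:
        state = model.predict(state, action, dt)
        trajectory.append(state)
    return trajectory
```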

Lex Fridman Podcast
#416 – Yann Lecun: Meta AI, Open Source, Limits of LLMs, AGI & the Future of AI

measure to what extent the final state satisfies a particular objective, like moving the bottle to the left of the table, and then plan a sequence of actions that will minimize this objective at runtime. We're not talking about learning, we're talking about inference time. So this is planning, really. And in optimal control, this is a very classical thing. It's called model predictive control.
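
This passage describes model predictive control: at inference time, search over candidate action sequences and keep the one whose predicted end state best satisfies the objective. The sketch below uses random shooting only because it is the simplest possible search; the function name and parameters are made up for illustration, not taken from any real system.

```python
# A hedged sketch of the model-predictive-control idea: score candidate
# action sequences by the objective evaluated on the predicted end state,
# and return the best one. Random shooting is the simplest possible search.
import numpy as np


def plan_by_random_shooting(model, state, objective, horizon=10, n_candidates=256,
                            action_dim=2, dt=0.1, rng=None):
    """Return the action sequence whose predicted end state best minimizes `objective`."""
    rng = np.random.default_rng() if rng is None else rng
    best_actions, best_cost = None, np.inf
    for _ in range(n_candidates):
        actions = rng.normal(size=(horizon, action_dim))  # candidate sequence of actions
        s = state
        for a in actions:                                 # imagined rollout, nothing is executed
            s = model.predict(s, a, dt)
        cost = objective(s)   # e.g. distance of the bottle from the spot on the left of the table
        if cost < best_cost:
            best_actions, best_cost = actions, cost
    return best_actions
```

Note that all of the optimization happens at inference time, exactly as the quote stresses: the model itself is not being trained here, only queried.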

Lex Fridman Podcast
#416 – Yann Lecun: Meta AI, Open Source, Limits of LLMs, AGI & the Future of AI

You have a model of the system you want to control that can predict the sequence of states corresponding to a sequence of commands. And you're planning a sequence of commands so that, according to your world model, the end state of the system will satisfy an objective that you fix. This is the way...
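
Putting the earlier hypothetical sketches together (the LinearWorldModel and plan_by_random_shooting examples above): fix an objective on the end state, then search for the commands that, according to the model, will satisfy it. Every name and value here is illustrative.

```python
# Usage sketch, reusing the hypothetical LinearWorldModel and
# plan_by_random_shooting definitions from the earlier examples.
import numpy as np

model = LinearWorldModel(A=np.zeros((2, 2)), B=np.eye(2))  # toy controllable "system"
start = np.array([0.0, 0.0])
goal = np.array([1.0, -0.5])


def objective(end_state: np.ndarray) -> float:
    """Objective fixed by the planner: distance of the end state from the goal."""
    return float(np.linalg.norm(end_state - goal))


commands = plan_by_random_shooting(model, start, objective, horizon=20, action_dim=2)
```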

Lex Fridman Podcast
#416 – Yann Lecun: Meta AI, Open Source, Limits of LLMs, AGI & the Future of AI

Rocket trajectories have been planned since computers have been around, so since the early 60s, essentially.

Lex Fridman Podcast
#416 – Yann Lecun: Meta AI, Open Source, Limits of LLMs, AGI & the Future of AI

Well, so no, you will have to build a specific architecture to allow for hierarchical planning. So hierarchical planning is absolutely necessary if you want to plan complex actions. If I want to go, let's say, from New York to Paris, it's the example I use all the time, and I'm sitting in my office at NYU, my objective that I need to minimize is my distance to Paris at a high level.
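
A toy illustration of the high-level objective described here: the planner works on a very abstract representation of location and only measures the remaining "distance" to Paris. The state names and cost values are invented for the sketch.

```python
# High-level planning operates on a coarse, abstract state; the objective is
# simply the remaining distance to the final goal. Values are made up.
ABSTRACT_DISTANCE_TO_PARIS = {  # coarse, high-level location -> remaining cost
    "NYU office": 3,
    "at the airport": 2,
    "on a plane to Paris": 1,
    "Paris": 0,
}


def high_level_objective(abstract_location: str) -> int:
    """Cost the high-level planner tries to drive to zero."""
    return ABSTRACT_DISTANCE_TO_PARIS[abstract_location]
```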

Lex Fridman Podcast
#416 – Yann Lecun: Meta AI, Open Source, Limits of LLMs, AGI & the Future of AI

a very abstract representation of my location, I would have to decompose this into two sub-goals. First one is go to the airport. Second one is catch a plane to Paris. Okay, so my sub-goal is now going to the airport. My objective function is my distance to the airport. How do I go to the airport? Well, I have to go in the street and hail a taxi, which you can do in New York.
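
A sketch of the decomposition in this excerpt: the distant goal is split into sub-goals, each with its own more local objective (distance to the airport rather than distance to Paris), and the same decomposition can be applied recursively, so "go to the airport" becomes "go down into the street", "hail a taxi", and so on. All names here are illustrative.

```python
# Hierarchical decomposition sketch: a goal maps to sub-goals, each carrying
# its own local objective on a finer-grained state. Purely illustrative.
from dataclasses import dataclass
from typing import Callable, List


@dataclass
class SubGoal:
    name: str
    objective: Callable[[str], float]  # local cost on a finer-grained state


def decompose(goal: str) -> List[SubGoal]:
    """Toy decomposition table for the New York-to-Paris example."""
    table = {
        "go to Paris": [
            SubGoal("go to the airport", lambda s: 0.0 if s == "airport" else 1.0),
            SubGoal("catch a plane to Paris", lambda s: 0.0 if s == "Paris" else 1.0),
        ],
        "go to the airport": [
            SubGoal("go down into the street", lambda s: 0.0 if s == "street" else 1.0),
            SubGoal("hail a taxi to the airport", lambda s: 0.0 if s == "airport" else 1.0),
        ],
    }
    return table.get(goal, [])
```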