Yann LeCun
Podcast Appearances
But you'll be able to monitor our progress because we publish our research, right? So, you know, last week we published the V-JEPA work, which is sort of a first step towards training systems for video. And then the next step is going to be world models based on this type of idea, training from video.
There's similar work taking place at DeepMind, and also at UC Berkeley, on world models from video. A lot of people are working on this. I think a lot of good ideas are appearing. My bet is that those systems are going to be JEPA-like; they're not going to be generative models. And we'll see what the future will tell.
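[Editor's note: a rough sketch of the distinction being drawn here, in PyTorch with toy, hypothetical module names. The actual V-JEPA uses masked spatio-temporal prediction with transformer encoders and an EMA target encoder to prevent collapse; the point of the sketch is only the objective: a generative model reconstructs future pixels, while a JEPA-style model predicts the representation of the future observation, so it can ignore unpredictable low-level detail.]

```python
# Toy contrast between a generative video predictor and a JEPA-style predictor.
# All module names and sizes are hypothetical, for illustration only.
import torch
import torch.nn as nn

class TinyEncoder(nn.Module):
    """Toy frame encoder: flattens a frame and maps it to an embedding."""
    def __init__(self, frame_dim=3 * 64 * 64, embed_dim=256):
        super().__init__()
        self.net = nn.Sequential(nn.Flatten(), nn.Linear(frame_dim, embed_dim))

    def forward(self, x):
        return self.net(x)

embed_dim = 256
context_encoder = TinyEncoder(embed_dim=embed_dim)
target_encoder = TinyEncoder(embed_dim=embed_dim)   # in practice an EMA copy of the context encoder
predictor = nn.Linear(embed_dim, embed_dim)          # predicts the future frame's embedding
pixel_decoder = nn.Linear(embed_dim, 3 * 64 * 64)    # generative baseline: predicts raw pixels

past_frame = torch.randn(8, 3, 64, 64)    # batch of "context" frames
future_frame = torch.randn(8, 3, 64, 64)  # batch of frames to predict

# Generative objective: reconstruct the future frame pixel by pixel.
z_past = context_encoder(past_frame)
pixel_pred = pixel_decoder(z_past)
generative_loss = nn.functional.mse_loss(pixel_pred, future_frame.flatten(1))

# JEPA-style objective: predict the future frame's *embedding* instead.
with torch.no_grad():                      # no gradient through the target encoder
    z_future = target_encoder(future_frame)
jepa_loss = nn.functional.mse_loss(predictor(z_past), z_future)
```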
There's really good work by a gentleman called Danijar Hafner, who is now at DeepMind, who has worked on models of this type that learn representations and then use them for planning or learning tasks by reinforcement learning. And there's a lot of work at Berkeley by Pieter Abbeel, Sergey Levine, and a bunch of other people of that type.
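[Editor's note: a minimal sketch of what "learn representations and then use them for planning" can look like, again in PyTorch with all names hypothetical. A learned latent dynamics model is rolled forward over candidate action sequences and a learned reward head scores them; Hafner's earlier PlaNet work planned in latent space with a cross-entropy-method optimizer, while this sketch uses simpler random shooting.]

```python
# Random-shooting planner over a learned latent world model (illustrative only;
# the dynamics and reward modules would normally be trained from experience).
import torch
import torch.nn as nn

embed_dim, action_dim, horizon, n_candidates = 64, 4, 10, 256

dynamics = nn.Linear(embed_dim + action_dim, embed_dim)  # z_{t+1} = f(z_t, a_t)
reward_head = nn.Linear(embed_dim, 1)                     # predicted reward of a latent state

def plan(z0):
    """Sample action sequences, simulate them in latent space,
    and return the first action of the highest-scoring sequence."""
    actions = torch.randn(n_candidates, horizon, action_dim)
    z = z0.expand(n_candidates, embed_dim)
    total_reward = torch.zeros(n_candidates)
    for t in range(horizon):
        z = dynamics(torch.cat([z, actions[:, t]], dim=-1))
        total_reward += reward_head(z).squeeze(-1)
    best = total_reward.argmax()
    return actions[best, 0]

first_action = plan(torch.randn(1, embed_dim))  # action to execute in the environment
```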
I'm collaborating with them, actually, in the context of some grants, with my NYU hat on. And then there are collaborations also through Meta, because the lab at Berkeley is associated with Meta in some way, with FAIR. So I think it's very exciting. I'm super excited about it. I haven't been that excited about the direction of machine learning and AI since 10 years ago, when FAIR started.
Before that, 30, 35 years ago, we were working on convolutional nets and the early days of neural nets. So I'm super excited because I see a path towards potentially human-level intelligence, with systems that can understand the world, remember, plan, and reason. There is a set of ideas for making progress there that might have a chance of working, and I'm really excited about this.
What I'd like is that somehow we get onto a good direction, and perhaps succeed, before my brain turns to a white sauce or before I need to retire.
Well, I used to be a hardware guy many years ago. Decades ago.