Yann LeCun
Podcast Appearances
Yeah, I mean, there was work from various places, but if you want to kind of place it in the GPT timeline, that would be around GPT-2, yeah.
Well, we're fooled by their fluency, right? We just assume that if a system is fluent in manipulating language, then it has all the characteristics of human intelligence. But that impression is false. We're really fooled by it.
Alan Turing would decide that the Turing test is a really bad test. Okay. This is what the AI community decided many years ago, that the Turing test was a really bad test of intelligence.
Hans Moravec would say the Moravec paradox still applies. Okay, we can pass.
No, of course, everybody would be impressed. But, you know, it's not a question of being impressed or not. It's the question of knowing what the limit of those systems can do. Again, they are impressive. They can do a lot of useful things. There's a whole industry that is being built around them. They're going to make progress.
But there are a lot of things they cannot do, and we have to realize what they cannot do and then figure out how we get there. I'm seeing this from basically 10 years of research on the idea of self-supervised learning.
Actually, that goes back more than 10 years, but the idea of self-supervised learning is basically capturing the internal structure of a set of inputs without training the system for any particular task: learning representations. You know, the conference I co-founded 14 years ago is called the International Conference on Learning Representations.
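To make that idea concrete, here is a minimal sketch of self-supervised learning in the masked-prediction style. The toy data, the tiny encoder-decoder, and the masking scheme are all illustrative assumptions, not a description of any particular system LeCun built: the point is only that the training signal comes from the inputs themselves, with no task labels.

```python
# A minimal self-supervised learning sketch, assuming PyTorch.
# We hide part of each input and train the network to fill it back in,
# so the only "labels" come from the data itself.
import torch
import torch.nn as nn

torch.manual_seed(0)

# Toy unlabeled data: 256 vectors of dimension 16.
data = torch.randn(256, 16)

# A small encoder-decoder; the encoder output is the learned representation.
encoder = nn.Sequential(nn.Linear(16, 8), nn.ReLU())
decoder = nn.Linear(8, 16)
optimizer = torch.optim.Adam(
    list(encoder.parameters()) + list(decoder.parameters()), lr=1e-2
)

for step in range(200):
    # Mask out a random half of each input's entries.
    mask = (torch.rand_like(data) > 0.5).float()
    corrupted = data * mask

    # Predict the full input from the corrupted view.
    representation = encoder(corrupted)
    reconstruction = decoder(representation)

    # Measure the loss only on the masked-out entries: the task is to
    # fill in what was hidden, which forces the encoder to capture the
    # internal structure of the inputs rather than copy them through.
    loss = ((reconstruction - data) ** 2 * (1 - mask)).mean()

    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```

In this style of training the decoder is typically discarded afterwards; the encoder's output is the representation that gets reused for downstream tasks.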