Yann LeCun

👤 Person
1,086 total appearances

Appearances Over Time

Podcast Appearances

Lex Fridman Podcast
#416 – Yann Lecun: Meta AI, Open Source, Limits of LLMs, AGI & the Future of AI

Yeah, I mean, there were work from various places, but if you want to kind of place it in the GPT timeline, that would be around GPT-2, yeah.

Lex Fridman Podcast
#416 – Yann Lecun: Meta AI, Open Source, Limits of LLMs, AGI & the Future of AI

Well, we're fooled by their fluency, right? We just assume that if a system is fluent in manipulating language, then it has all the characteristics of human intelligence. But that impression is false. We're really fooled by it.

Lex Fridman Podcast
#416 – Yann Lecun: Meta AI, Open Source, Limits of LLMs, AGI & the Future of AI

Alan Turing would decide that the Turing test is a really bad test. Okay. This is what the AI community has decided many years ago, that the Turing test was a really bad test of intelligence.

Lex Fridman Podcast
#416 – Yann Lecun: Meta AI, Open Source, Limits of LLMs, AGI & the Future of AI

Hans Moravec would say the Moravec paradox still applies. Okay. Okay. Okay, we can pass.

Lex Fridman Podcast
#416 – Yann Lecun: Meta AI, Open Source, Limits of LLMs, AGI & the Future of AI

No, of course, everybody would be impressed. But, you know, it's not a question of being impressed or not. It's the question of knowing what the limit of those systems can do. Again, they are impressive. They can do a lot of useful things. There's a whole industry that is being built around them. They're going to make progress.

Lex Fridman Podcast
#416 – Yann Lecun: Meta AI, Open Source, Limits of LLMs, AGI & the Future of AI

But there is a lot of things they cannot do and we have to realize what they cannot do and then figure out how we get there. I'm seeing this from basically 10 years of research on the idea of self-supervised learning.

Lex Fridman Podcast
#416 – Yann Lecun: Meta AI, Open Source, Limits of LLMs, AGI & the Future of AI

Actually, that's going back more than 10 years, but the idea of self-supervised learning, so basically capturing the internal structure of a set of inputs without training the system for any particular task, learning representations. You know, the conference I co-founded 14 years ago is called International Conference on Learning Representations.
