Yann LeCun
Are we going to have systems that can learn from video how the world works and learn good representations? Yeah. Before we get them to the scale and performance that we observe in humans, it's going to take quite a while. It's not going to happen in one day. Are we going to get systems that have a large amount of associative memory so they can remember stuff?
Yeah, but same, it's not going to happen tomorrow. I mean, there are some basic techniques that need to be developed. We have a lot of them, but getting them to work together in a full system is another story. Are we going to have systems that can reason and plan, perhaps along the lines of the objective-driven AI architectures that I described before?
Yeah, but before we get this to work properly, it's going to take a while. And before we get all those things to work together, and then on top of this have systems that can learn hierarchical planning, hierarchical representations, systems that can be configured for a lot of different situations at hand the way the human brain can, all of this is going to take at least a decade and probably much more, because there are
a lot of problems that we're not seeing right now, that we have not encountered. And so we don't know if there is an easy solution within this framework. So, you know, it's not just around the corner. I mean, I've been hearing people for the last 12, 15 years claiming that AGI is just around the corner and being systematically wrong. And I knew they were wrong when they were saying it.
I call their bullshit.
I don't think it's just Moravec's paradox. Moravec's paradox is a consequence of realizing that the world is not as easy as we think. First of all, intelligence is not a linear thing that you can measure with a single number. Can you say that humans are smarter than orangutans? In some ways, yes. But in some ways, orangutans are smarter than humans in a lot of domains.
That allows them to survive in the forest, for example.