Fei-Fei Li
Exactly. So just to finish, so the neuroscientists were studying the structure of the mammalian brain and how that visual information was processed. Fast forward, that study got the Nobel Prize in the 1980s because it's such a fundamental discovery. But that inspired computer scientists.
So there is a separate small group of computer scientists who are starting to build algorithms inspired by this hierarchical information processing architecture.
No, it's a whole algorithm, but you build mathematical functions that are layered.
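(Editor's note: a minimal sketch, in Python with NumPy, of what "mathematical functions that are layered" can mean in practice. The layer sizes and the ReLU nonlinearity are illustrative choices, not details from the conversation.)

```python
import numpy as np

def layer(x, weights, bias):
    # One layer: a linear map followed by a simple nonlinearity (ReLU).
    return np.maximum(0.0, weights @ x + bias)

rng = np.random.default_rng(0)
x = rng.normal(size=4)                                 # a toy "image" of 4 numbers
h1 = layer(x, rng.normal(size=(8, 4)), np.zeros(8))    # first layer of functions
h2 = layer(h1, rng.normal(size=(8, 8)), np.zeros(8))   # second layer, built on the first
out = rng.normal(size=(1, 8)) @ h2                     # final readout
print(out)
```

Each layer is just a function applied to the previous layer's output, so the whole network is a composition of simple functions stacked on top of one another.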
So you can have one small function that processes brightness, another that processes curvature. I'm being schematic. And then you process the information. But what was really interesting about this approach is that in the early 80s, this neural network approach found a learning rule. So suddenly it unlocked how to learn this automatically, without hand-coding. It's called backpropagation.
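(Editor's note: a hedged, minimal sketch of the backpropagation learning rule she describes. A tiny two-layer network learns the XOR function by pushing the error gradient backwards through the layers with the chain rule; the task, layer sizes, learning rate, and sigmoid nonlinearity are all illustrative assumptions.)

```python
import numpy as np

rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], float)  # toy inputs
y = np.array([[0], [1], [1], [0]], float)              # XOR targets

W1, b1 = rng.normal(size=(2, 4)), np.zeros(4)
W2, b2 = rng.normal(size=(4, 1)), np.zeros(1)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

lr = 0.5
for step in range(5000):
    # Forward pass: layered functions, as described above.
    h = sigmoid(X @ W1 + b1)
    p = sigmoid(h @ W2 + b2)
    # Backward pass: the chain rule pushes the error gradient back, layer by layer.
    dp = (p - y) * p * (1 - p)
    dW2, db2 = h.T @ dp, dp.sum(0)
    dh = dp @ W2.T * h * (1 - h)
    dW1, db1 = X.T @ dh, dh.sum(0)
    # Gradient step: the weights are learned automatically, not hand-coded.
    W1 -= lr * dW1; b1 -= lr * db1
    W2 -= lr * dW2; b2 -= lr * db2

print(np.round(p, 2))  # approaches [[0], [1], [1], [0]]
```

The point of the rule is exactly what she says: no one hand-codes what each function should compute; the gradient flowing backwards adjusts every layer at once.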
And also Geoff Hinton, along with others who discovered this, was awarded the Nobel Prize last week for it. But that is the neural network algorithm.
You could actually.
You just keep filtering it. Of course, you combine it in mathematically very intricate ways, but it is a little bit like layers of filtration.
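(Editor's note: a minimal sketch of the "layers of filtration" picture, assuming NumPy. Each stage convolves the previous output with a small filter, in the cross-correlation style most deep-learning libraries use; the specific blur and Sobel edge filters are illustrative, not hers.)

```python
import numpy as np

def conv2d(image, kernel):
    # Valid 2D convolution (cross-correlation): the basic "filter" step.
    kh, kw = kernel.shape
    h, w = image.shape
    out = np.zeros((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

image = np.random.default_rng(0).random((8, 8))        # a toy grayscale image
blur = np.full((3, 3), 1 / 9)                          # layer 1: smooth brightness
edge = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]])  # layer 2: vertical edges
filtered = conv2d(conv2d(image, blur), edge)           # filter, then filter again
print(filtered.shape)  # (4, 4): each layer re-describes the image in new terms
```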