Gary O'Reilly
So wouldn't that require then...
training this neural net on every possible way a bird can manifest in a photo so that it can intuit what a bird might be when a bird is not there.
It really is going on locally, yeah.
Interesting.
Well, what you're describing sounds like putting together a very large puzzle right now.
You know, the kind of puzzles that you put down on the table?
The first thing that you do is you want to find all the edges.
And you build the puzzle inward from finding all the edges.
So straight lines, things like that, they all match up when you're doing a puzzle.
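The puzzle analogy loosely mirrors how the early layers of a vision network behave: they respond first to edges and straight lines, and later layers build inward from those. A minimal sketch of that first step, using a made-up 5x5 toy image and a Sobel-style kernel (all values here are illustrative, not from the conversation):

```python
# Toy 5x5 grayscale "image": a bright square on a dark background.
image = [
    [0, 0, 0, 0, 0],
    [0, 9, 9, 9, 0],
    [0, 9, 9, 9, 0],
    [0, 9, 9, 9, 0],
    [0, 0, 0, 0, 0],
]

# A Sobel-style kernel that responds strongly to horizontal edges.
kernel = [
    [-1, -2, -1],
    [ 0,  0,  0],
    [ 1,  2,  1],
]

def convolve(img, ker):
    """Valid-mode 2D convolution (no padding), returning the response map."""
    kh, kw = len(ker), len(ker[0])
    out = []
    for r in range(len(img) - kh + 1):
        row = []
        for c in range(len(img[0]) - kw + 1):
            s = sum(ker[i][j] * img[r + i][c + j]
                    for i in range(kh) for j in range(kw))
            row.append(s)
        out.append(row)
    return out

edges = convolve(image, kernel)
for row in edges:
    print(row)
```

The response map lights up along the top and bottom borders of the square and is zero in its flat interior, which is the "find all the edges first" step a learned convolutional filter performs.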
Yeah, that's what they're there for, Professor.
All right, well, now we've got a problem.
So it seems like, with what you're talking about from the 70s, we could have had what we have today.
We just didn't have the mathematical computing power to make this work.
It sounds like, and I'm just trying to get my head around what you explained, that there is a cascading relationship to these values, and that really what matters are the values that are closest to the next value.
And then there is this kind of cascading reinforcement to say, yes, this is it, or no, it is not.
Am I getting that right?
I'm just trying to figure out what you're saying here in a really plain way.
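The "cascading values" picture above can be sketched as a tiny forward pass: each layer's outputs are weighted sums of the previous layer's outputs, so any value influences the final answer only through the values in the next layer up. All weights and inputs here are hypothetical illustrative numbers, not anything from the conversation:

```python
# Minimal sketch of values cascading through a two-layer network.

def relu(x):
    # Standard rectifier: negative sums are clipped to zero.
    return max(0.0, x)

def layer(inputs, weights):
    """One dense layer: each output is relu(weighted sum of the inputs)."""
    return [relu(sum(w * x for w, x in zip(row, inputs))) for row in weights]

inputs = [1.0, 0.5]                    # raw input values (made up)
w1 = [[0.8, -0.4], [0.3, 0.9]]         # input -> hidden weights (hypothetical)
w2 = [[1.0, 0.5]]                      # hidden -> output weights (hypothetical)

hidden = layer(inputs, w1)   # values cascade forward into the hidden layer
output = layer(hidden, w2)   # ...and from there into the final output
print(hidden, output)
```

Training runs the same chain in reverse: the error signal flows back layer by layer, reinforcing the weights that pushed toward "yes, this is it" and weakening the ones that did not, which is the cascading reinforcement being described.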
All right.
That's much less information.
You cleared it up.