Stephen Wolfram
Then the question is: can you make a model that figures out how long it would take the ball to fall to the ground from a floor you didn't explicitly measure?
And the thing Galileo realized is that you can use mathematical formulas to make a model for how long it will take the ball to fall.
So now the question is: suppose you want to make a model for something much more elaborate. Say you've got this arrangement of pixels; is this arrangement of pixels an A or a B?
Does it correspond to something we'd recognize as an A or a B?
And you can make a similar kind of model: each pixel is like a parameter in some equation, and you could write down a giant equation whose answer is either one or two, A or B. The question then is: what kind of model successfully reproduces the way that we humans would conclude that this is an A and this is a B?
You know, if there's a complicated extra tail on the top of the A, would we then conclude something different?
What is the type of model that maps well into the way that we humans make distinctions about things?
And the big meta-discovery is that neural nets are such a model.
It's not obvious they would be such a model.
It could have been that human distinctions weren't captured by any such model. We could try searching around for a type of model, whether a mathematical model or a model based on something else, that captures typical human distinctions about things.
It turns out that this model, which is very much the way we think the architecture of the brain works, corresponds, perhaps not surprisingly, to the way we make these distinctions.
And so the core next point is that this neural net model makes distinctions and generalizes in roughly the same way that we humans do.
And that's why, when you say "the cat sat on the green ___", it can fill in the blank even though it hasn't seen many examples of "the cat sat on the green" whatever. Or "the aardvark sat on the green" whatever; I'm sure that particular sentence does not occur on the internet. So it has to generalize from the actual examples that it's seen.
And so the fact is that neural nets generalize in the same kind of way that we humans do.
Aliens might look at our neural net generalizations and say, "That's crazy. When you put that extra little dot on the A, that isn't an A anymore."
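The pixels-as-parameters idea above can be sketched in a few lines. This is a toy illustration, not anything from the conversation: the 5x5 glyphs, the training setup, and the use of a single-layer logistic model as a stand-in for a neural net are all invented for demonstration. It trains on one "A" grid and one "B" grid, then checks that an extra dot in a corner that is blank in both glyphs does not flip the classification.

```python
import numpy as np

# Invented 5x5 binary glyphs standing in for pixel arrangements of "A" and "B".
A = np.array([[0, 0, 1, 0, 0],
              [0, 1, 0, 1, 0],
              [1, 1, 1, 1, 1],
              [1, 0, 0, 0, 1],
              [1, 0, 0, 0, 1]], dtype=float)
B = np.array([[1, 1, 1, 1, 0],
              [1, 0, 0, 0, 1],
              [1, 1, 1, 1, 0],
              [1, 0, 0, 0, 1],
              [1, 1, 1, 1, 0]], dtype=float)

X = np.stack([A.ravel(), B.ravel()])  # each pixel is one parameter of the model
y = np.array([0.0, 1.0])              # target: 0 means "A", 1 means "B"

rng = np.random.default_rng(0)
w = rng.normal(scale=0.1, size=25)    # one weight per pixel
b = 0.0

def predict(x):
    """Sigmoid of a weighted sum of pixels: probability the glyph is 'B'."""
    return 1.0 / (1.0 + np.exp(-(x @ w + b)))

# A few hundred steps of gradient descent on the cross-entropy loss.
for _ in range(500):
    err = predict(X) - y
    w -= 0.5 * X.T @ err / len(y)
    b -= 0.5 * err.mean()

labels = ["A", "B"]
print([labels[int(p > 0.5)] for p in predict(X)])  # classifies the training glyphs

# An extra "dot" at position (0, 4), which is blank in both glyphs: that pixel's
# weight was never pushed away from its small random initial value, so the
# decision should not flip.
A_dotted = A.copy()
A_dotted[0, 4] = 1.0
print(labels[int(predict(A_dotted.ravel()) > 0.5)])
```

The point of the last check mirrors the transcript: whether a model generalizes "like us" shows up in how it treats small perturbations, such as a stray dot on an A.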