Adam Brown
Clearly, there are many disanalogies between human intelligence and these large language models.
But I think at the right level of abstraction, it may be the same.
At the moment, these systems tend to be doing more elementary material than that.
They tend to be doing undergraduate-level material.
Yes, I haven't seen anything that jumps out at me like inventing general relativity, or even a toy version of that.
But there is, in some sense, creativity or interpolation required to answer any of these problems: you start with some science problem, you need to recognize that it's analogous to some other thing that you know, then combine them, make a mathematical problem out of it, and solve that problem.
You know, I think maybe we need to back up to the question of in what sense humans do or don't think natively in higher dimensions.
Obviously, it's not our natural space.
There was a technology that was invented to think about these things, which was notation: tensor notation and various other things that allow you, using just writing, as Einstein did 100 years ago, to naturally move between dimensions.
And then you're thinking more about manipulating these mathematical objects than about thinking in higher dimensions.
I don't think there's any sense in which large language models naturally think in higher dimensions more than humans do.
You could say, well, these large language models have billions of parameters.
That's like a billion dimensional space.
But you could say the same about the human brain, that it has all of these billions of parameters and is therefore billion dimensional.
Whether that fact translates into thinking in billions of spatial dimensions, though, I don't really see that in the human, and I don't think it applies to an LLM either.
I think that's certainly true.
It is definitely seeing more examples than any of us will ever see in our lives.