Mazviita Chirimuuta
I think we're at this moment in science now because we have these tools, like LLMs for language and ConvNets in visual neuroscience, being used as predictive models of neuronal responses, and they don't have that mathematical legibility that people originally aspired to, back when I was trained in the field.
And so you have this possible conflict: you can either pursue the goal of understanding or you can pursue the goal of prediction, but it seems like you can't have both at the same time.
I think one of the interesting things about this phenomenon, not only of LLMs but of the internet as this supposed repository of all human knowledge, is that it goes along with this idea that knowledge doesn't have to be perspectival.
It doesn't have to be of a place, of a community.
It kind of can float free of the situation in which this knowledge was acquired.
That's kind of the aspiration behind these ideas of a universal repository of knowledge. But what this perspectivalist position points us to is that knowledge is inherently of a place, of a community. We acquire knowledge not by being completely open-minded to everything it's possible to know, but by narrowing our view and discounting possibilities. That is what allows you to pursue a line of inquiry and pin down some information about, say, the natural world, in a way that is humanly achievable.
So the contrast I'm trying to make here is between a view which says that knowledge is perspectival.
It's inherently from a human point of view, which means that it's inherently finite.
We cannot aspire to this sort of universal free-floating knowledge because as finite human beings, we can only achieve knowledge of the world through recognizing our limitations.
And then there's this notion that you can have non-perspectival knowledge, like everything on the internet, based on all of the different possible perspectives blended together, which somehow gives us a God's-eye view.
LLMs aspire to be this every-person voice, but it's precisely because they don't have a particular socialisation into a finite community that they're not reliable, that we can't pin them down to...
We just look around.
We absorb how things are.
Our knowledge is sort of entirely objective.
It's almost like a God's eye view on reality.