Sarah Walker
I think one thing that I notice much more viscerally than I hear other people describe is that the representations in our mind and the way that we use language are not the things themselves... Actually, this is an important point going back to what Gödel did, but also this idea of signs and symbols and all the ways of separating them. There's the word, right?
And then there's what the word means about the world. And we often confuse those things. And what I feel very viscerally, I almost sometimes think I have some kind of synesthesia for language or something, and I just don't interact with it the way that other people do. But for me, words are objects, and the objects are not the things that they describe.
They have a different ontology to them. They're physical things, and they carry causation, and they can create meaning, but they're not what we think they are. And also the internal representations in our mind, like the things I'm seeing about this room, are probably, you know, a small projection of the things that are actually in this room.
And I think we have such a difficult time moving past the way that we build representations in the mind and the way that we structure our language, and realizing that those are approximations to what's out there, that they're fluid, that we can play around with them and see deeper structure underneath them, that I think we're missing a lot.
Yeah, for sure. I love this essay by Poincaré about mathematical creativity, where he talks about this sort of frothing of all these things, and then somehow you build theorems on top of it and they become kind of concrete. And I also think about this with language.
There's a lot of stuff happening in your mind, but you have to compress it into a few sets of words to try to convey it to someone. So it's a compactification of the space, and it's not a very efficient one. And I think just recognizing that there's a lot happening behind language is really important.
I think one of the great things about the existential trauma of large language models is the recognition that language is not the only thing required. Like there's something underneath it. Not by everybody.
Yeah, I was just gonna say it's like a playground.