Tina Eliassi-Rad
Podcast Appearances
And so right now, the way that we're representing psychology, this knowledge, these things, is as vectors, because I'm a computer scientist.
Basically, how much does this vector space move in one direction versus another? So as you talk with others, you can build these simulations in terms of conversations and see how much the vector space shifts.
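A minimal sketch of what "seeing how much the vector space shifts" could look like, under assumptions not spelled out in the conversation: beliefs are stand-in embedding vectors (any sentence or document embedding model could supply them), a conversation is modeled as a toy linear nudge toward a partner's vector, and the shift is measured as cosine distance between the starting and ending vectors.

```python
import numpy as np

def cosine_distance(u, v):
    """1 - cosine similarity between two belief vectors."""
    return 1.0 - np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))

def simulate_conversation(belief, partner_belief, influence=0.1, steps=10):
    """Toy update rule: each turn, nudge the belief vector toward the partner's.

    The linear update and the `influence` parameter are illustrative
    assumptions, not a model described in the interview.
    """
    trajectory = [belief.copy()]
    for _ in range(steps):
        belief = belief + influence * (partner_belief - belief)
        trajectory.append(belief.copy())
    return trajectory

rng = np.random.default_rng(0)
belief_a = rng.normal(size=64)   # stand-in embedding of person A's views
belief_b = rng.normal(size=64)   # stand-in embedding of person B's views

trajectory = simulate_conversation(belief_a, belief_b)
shift = cosine_distance(trajectory[0], trajectory[-1])
print(f"Total shift in belief vector after the simulated conversation: {shift:.3f}")
```

Swapping the random vectors for real embeddings of a person's statements before and after a conversation would give the same kind of scalar "how far did the vector move" measurement.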
Yeah. So there's a book by Ladyman and Wiesner, and I know that you had James Ladyman on your podcast as well; he's a philosopher at Bristol. And Karoline Wiesner is a mathematician at Potsdam now.
Their book, about what a complex system is, came out I think in 2020 and talked about complex systems in terms of features: there are certain necessary features, there are certain emergent features, and then there are some functional features. For example, our human brain is a complex system, and as you were saying, if it has a shock, it adapts and perhaps can still function, unless the shock is catastrophic.
And so what we are not seeing, if we tie this to, for example, the AI models and how they are operating within this system, is that we don't even know the role of this AI system: how much instability is it causing in the system? How much feedback is it causing? How much memory does it have? Because they're evolving so quickly, it's not quite clear.
So this is an open area of study: going through these different features of a complex system and trying to see, OK, how do I measure each one for, let's say, ChatGPT?
In fact, a lot of people say, oh, well, it doesn't have a good memory, based on what I told it yesterday. So memory is one of those features that a complex system has.
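One way the memory feature she mentions might be probed, sketched under clearly labeled assumptions: `ask(history, message)` is a hypothetical wrapper around whichever chat model is being measured (it is not a real API), a fact is planted early, the conversation is padded with distractors, and recall is then tested.

```python
def probe_memory(ask, fact="My cat's name is Pashmak.", distractor_turns=20):
    """Plant a fact, pad the conversation, then test recall.

    `ask(history, message)` is a hypothetical wrapper around whatever
    chat model is being measured; it is expected to see the full
    history each turn and return the model's reply as a string.
    """
    history = []
    history.append(("user", fact))
    history.append(("assistant", ask(history, fact)))

    # Filler turns that have nothing to do with the planted fact.
    for i in range(distractor_turns):
        msg = f"Unrelated question #{i}: what is {i} squared?"
        history.append(("user", msg))
        history.append(("assistant", ask(history, msg)))

    # Recall test: did the planted fact survive the distractors?
    answer = ask(history, "What is my cat's name?")
    return "pashmak" in answer.lower()
```

Running this over increasing `distractor_turns` (or over longer gaps between sessions) would give a rough curve of how far back the system's effective memory reaches, which is one concrete handle on the complex-systems "memory" feature.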
Yeah, I think where it comes in, and in fact this is how it links to my new project on epistemic instability, is that it introduces epistemic instability. When my dad was getting his PhD in America back in the 60s, the most trusted man in America was Walter Cronkite. If he said something, you believed him. Now we don't have such a thing.
We don't have a person or an institution where you say, okay, I read it here and I believe it. And then, depending on where you are on the left or the right, maybe you believe the New York Times, or you believe Fox News. And so because of that, I feel like one of the things that we need to do, if we value our democracy, is teach our kids critical thinking.
It's just: don't believe what you read or what you hear. Question it. Does it make sense? Talk to different people, make your own decision, and don't give up your agency. But that's a hard task. Thinking is not easy, and people don't want to think in the age of TikTok.