Tina Eliassi-Rad
Podcast Appearances
Yeah, maybe. Yeah, we didn't do any kind of causal stuff, right? A lot of the work, a lot of the hype that's happening now in AI and machine learning, it's all on the correlation side, not on the causation side. So we didn't look at what causes what at all. That's very difficult. And I haven't touched the field of causation, in part because I'm married to a philosopher.
Because every time I try to approach the topic, I just hear nightmares. And so I haven't gone that way yet.
Yeah, I think there's some of that. I think the best way of using this is perhaps government policy, right? When a government issues a policy, then maybe 20 years from that, if you have good data, you could see, OK, what have been some of the correlations that have come about based on this policy?
And then maybe, you know, the actual social scientists and political scientists can draw some causal diagrams from what we find. Because the one thing is, usually in computer science, AI, and machine learning, we treat causation and correlation as if they were binary, right? As if it's a coin, this way or that way. But that is really not the case, right? It's more of a spectrum.
And so if you have a model that is producing robust predictions, there is some underlying causal model. You just don't know it. And then maybe that could steer you in the right direction for that kind of work. But we didn't look at that for this particular work.
Yeah, I am very interested in the feedback that we were talking about, and how we capture that feedback. For example, when I'm using Amazon and Amazon is making me these recommendations, and then I buy things and tell my friends, all of that data goes back into Amazon. How much do my contributions, or my friends' contributions, amplify what Amazon is doing?
And so there's some of that going on. And then there's also the fact that society is a complex system, and the place of these tools in these systems. So the tools that help us spread misinformation and disinformation make our society unstable, in that you're not quite sure whether what you are reading is true or not, right?
So right now, with the fires in LA, there's a lot of misinformation and disinformation going on. And it's like, who do I believe? And maybe you believe the LA Times, and you believe, you know, what you read on CA.gov and so on and so forth, but not what you're seeing on Instagram.
And so there's this notion of the place of these AI tools within our society and whether they're making our society better or worse. And by better or worse here, I mean stable versus not stable, more chaotic. And I think we can all agree that we would like to live in societies that are more stable than not, right? So there's some of that that is going on.
And I have a new project along those lines, which actually touches on philosophy, called epistemic instability: what are the stability conditions of what you know? So if you genuinely know that whales are mammals, then no matter what I show you, perhaps I won't be able to convince you that a whale laid an egg. You're like, a whale is a mammal, and mammals do not lay eggs, right?