Tina Eliassi-Rad
Podcast Appearances
I think there was less distraction for sure, right, than it is now. I think the dopamine hit that we get by just scrolling through Instagram, TikTok, et cetera, is something that has been studied. And, you know, I'm not a psychologist or a cognitive scientist, but that people, it's just like you let your brain go to mush and you just like spend hours on it instead of,
maybe actually sitting quietly and thinking about a problem, you know, it's boring, you know?
Yeah, in fact, that's such a perfect thing. I always say to my students, what is your objective function? Because we all have an objective function, and that objective function changes over time. And perhaps if all of us just think, okay, did my objective function change from yesterday or from last month or whatever? You know, it would be helpful for society.
So as a computer scientist, as a machine learning person, I always think about objective functions. And in fact, I cannot look at a mountain range now and not think, OK, if you drop me there, will I find the peak or not? The global peak? Probably not. But, you know, like, please drop me at a nice place.
So the gradient is with me.
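The drop-me-on-a-mountain-range picture is exactly gradient ascent: follow the local slope uphill, and which peak you reach depends entirely on where you start. A minimal sketch, using a hypothetical one-dimensional "mountain range" (two Gaussian hills of different heights, invented here for illustration):

```python
import math

def f(x):
    # Toy "mountain range": a small hill near x = 1 and a taller one near x = 5.
    return math.exp(-(x - 1) ** 2) + 2.0 * math.exp(-0.5 * (x - 5) ** 2)

def grad(x):
    # Derivative of f: the local slope at x.
    return (-2.0 * (x - 1) * math.exp(-(x - 1) ** 2)
            - 2.0 * (x - 5) * math.exp(-0.5 * (x - 5) ** 2))

def climb(x0, lr=0.1, steps=5000):
    """Plain gradient ascent: from the drop point, keep stepping uphill."""
    x = x0
    for _ in range(steps):
        x += lr * grad(x)
    return x

low_peak = climb(0.0)   # dropped near the small hill -> local peak near x = 1
high_peak = climb(6.0)  # dropped near the tall hill  -> global peak near x = 5
```

Dropped at `x = 0`, the climber stalls on the small hill and never sees the taller one; only a lucky drop point near `x = 5` finds the global peak — "please drop me at a nice place."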
I mean, as an introvert, I'm fine with it. But yeah, no, I think we see this in society now where like people aren't,
um, as good at interacting with other people, or they're not as courteous to other people, perhaps, as before. I don't know, maybe I'm of an age now where I'm like, oh yeah, people are not as courteous as they were before. But, you know, the more you interact with people, the better you get at it, and the less you interact with them, the worse you get. And so if we don't put a premium on, like, oh, look, Tina can't actually pick up the
As opposed to just sending a zillion emails or text messages. I think there's a value to that. And I think there is this notion of trust. Even the most introvert among us, there are a few people that we do trust. And so if it comes to a point where you trust an AI system that we don't know how it works and that it's vulnerable to attacks, then that is a problem, right?
And so, in fact, this gets us to this phrase called the red teaming that we hear all the time now that, oh, well, don't worry about it. They will red team it. And so the phrase red teaming came from the Cold War era, right? So the Soviet Union, the red team, America, the blue team, right? So, and there was a lot of this red team, blue teaming, for example, for cybersecurity, right?
But this phrase, red teaming, is not well defined when it comes to these generative AI systems. And my friend and colleague, Professor Hoda Heidari at Carnegie Mellon, has written extensively about this, because there's no guarantee, right? So you cannot guarantee that somebody cannot jailbreak ChatGPT. And jailbreaking is basically that ChatGPT has put in some kind of informational guardrails, right?