Sarah Walker
Basically, we're the first things that are actually capable of understanding anything. It doesn't mean an individual understands everything, but we have that capability. And so there's not a difference between that and what people talk about with AGI.
In some sense, AGI is a universal explainer, but it might be that a computer is much more efficient at doing, I don't know, prime factorization or something than a human is. That doesn't mean it's necessarily smarter, or has a broader reach in the kinds of things it can understand, than a human does. And so I think we really have to ask: is it a level shift?
Or is it that we're enhancing certain kinds of capabilities humans have, in the same way that we enhance eyesight by making telescopes and microscopes? Are we extending capabilities we already have into technologies, so that the entire global ecosystem is getting more intelligent? Or is it really that we're building some super machine in a box that's going to be smart and kill everybody?
That's not even a science fiction narrative; it's a bad science fiction narrative. I just don't think it's accurate to any of the technologies we're building or the way we should be describing them. It's not even how we should be describing ourselves.
Well, these are human questions, right? I don't think they're necessarily questions we're going to outsource to an artificial intelligence. I think what is happening, and will continue to happen, is a co-evolution between humans and technology. We're coexisting in this ecosystem right now, and we're maintaining a lot of the balance. For the balance to shift to the technology would require some very bad human actors, which is a real risk, or some sort of dynamic that favors it... I just don't know how that plays out without human agency actually trying to push it in that direction.
So I think the things that are terrifying are deepfakes, and all the kinds of issues that become legal issues around artificial intelligence technologies: using them to control weapons, using them for child pornography, or faking that someone's loved one was kidnapped or killed.
There are all kinds of things in this landscape that are super scary, and all kinds of new legislation and guardrails on the technology need to be built to make sure people don't abuse it. And that needs to happen. And I think one function of sort of the artificial intelligence doomsday space