If you're telling me we need to be deeply concerned about the welfare of cows that we lock into factory farms, you got me. Absolutely. For sure. If you're telling me I should feel bad about running down a bunch of cops in Grand Theft Auto.
There's like, this is the, I mean, and he does say, like, I don't consider this a main problem. But the fact that you think this is a problem at all means that you believe silly things about consciousness. Yeah. Yeah. Anyway, I think the fact that he leads himself here is kind of evidence of the sort of logical fractures that are very common in this community.
But this is the guy that Young Ziz is drawn to. She loves this dude. Right. He is kind of her first intellectual heartthrob. And she writes, quote, my primary concern upon learning about the singularity was how do I make this benefit all sentient life, not just humans? So she gets interested in this idea of the singularity. It's inevitable that an AI god is going to arise. And she gets into the
You know, the rationalist thing of we have to make sure that this is a nice AI rather than a mean one. But she adds this other thing to it, which is that this AI has to care as much as I do about animal life, right? Otherwise, we're not really making the world better, you know? Right. Now, Tomasik advises her to check out LessWrong, which is how Ziz starts reading Eliezer Yudkowsky's work.
From there, in 2012, she starts reading up on effective altruism and existential risk, a term for the risk that a superintelligent AI will kill us all. She starts believing in all of this kind of stuff.
Her particular belief is that the singularity, when it happens, is going to occur in a flash, kind of like the rapture, and almost immediately lead to the creation of either a hell or a heaven. This will be done by what they call the singleton, the term they use for this inevitable AI god that's going to come about. Her obsession is that
she has to find a way to make the singleton a nice AI that cares about animals as much as it cares about people, right? That's her initial big motivation. So she starts emailing Tomasik with her concerns, because she's worried that the other rationalists aren't vegans, right, and they don't feel like animal welfare is the top priority for making sure this AI is good.