Tara Isabella Burton
Podcast Appearances
Something I don't want to overlook about Bryan Johnson is that he could also, in some ways, be considered a wellness influencer. Oh, absolutely. Absolutely. His fame, his public persona: he is able to do what he's doing in part because we are watching it. We can't look away. He is a fascinating reality show character. We watch the documentary that was made about him. We read his tweets.
And so he is simultaneously, perhaps, extending his own lifespan, but he's also very measurably taking from our lifespans, in the sense that our time is being dedicated to him.
Absolutely. In an era when saving for retirement is a source of stress, and an impossibility for so many, the idea that infinite time is something to be looked at only with desire rather than fear does mean that a long life is for the wealthy.
What Bryan Johnson is doing, or claims to be doing, is serving as a model for what it might look like to be, basically, a priest in the religion of AI.
There's been this longstanding Silicon Valley tradition of thinking you can use science and technology to hack yourself. But more recent, and I think more relevant to both Bryan Johnson's desire to get rid of his rascal brain and the interest in the power of AI, is a community known as the rationalist subculture, or the rationalist community.
And this is a bit of an umbrella term for what basically started out as the commenters and readers of a group of blogs, most notably Overcoming Bias, Less Wrong, and Slate Star Codex, that claimed to help people think better. The idea is: you're a human, you're a dumb animal, you have self-serving biases and ways of looking at the world that make you dumber. Here's how not to do that.
Certainly, there are ways you can, in fact, train your brain not to make certain kinds of errors. But it really turned into a closed subculture that had some quasi-religious qualities. That language of self-overcoming became part of, let's say, the tech world mainstream. And a lot of rationalists were also interested in the problem of AI alignment.
And particularly in X-risk, the existential risk that someone might accidentally create an artificial intelligence that is hostile to humans and wipes us out. How do you address that fear? One answer is to stop AI development. Another is to ensure that whatever AI you are creating wants to work with humans, or is going to be friendly to humans.