LessWrong (Curated & Popular)

"IABIED Book Review: Core Arguments and Counterarguments" by Stephen McAleese

05 Feb 2026

Transcription

Chapter 1: What are the background arguments leading to the book's core claim?

0.082 - 25.887 Stephen McAleese

IABIED Book Review: Core Arguments and Counterarguments. By Stephen McAleese. Published on January 24, 2026. The recent book If Anyone Builds It, Everyone Dies (September 2025) by Eliezer Yudkowsky and Nate Soares argues that creating superintelligent AI in the near future would almost certainly cause human extinction. Quote.

27.25 - 49.157 Stephen McAleese

If any company or group, anywhere on the planet, builds an artificial superintelligence using anything remotely like current techniques, based on anything remotely like the present understanding of AI, then everyone, everywhere on Earth, will die. End quote. The goal of this post is to summarize and evaluate the book's key arguments and the main counterarguments critics have made against them.

Chapter 2: Why write another review of the book?

49.812 - 67.941 Stephen McAleese

Although several other book reviews have already been written, I found many of them unsatisfying: a lot of them are written by journalists whose goal is to write an entertaining piece, and they only lightly cover the core arguments or don't seem to understand them properly, instead resorting to weak moves like strawmanning, ad hominem attacks, or criticizing the style of the book.

69.003 - 85.96 Stephen McAleese

So my goal is to write a book review that has the following properties. It is written by someone who has read a substantial amount of AI alignment and LessWrong content and won't make beginner mistakes or misunderstandings about AI alignment, for example not knowing about the orthogonality thesis or instrumental convergence.

Chapter 3: What standards does this review aim to meet?

87.081 - 106.861 Stephen McAleese

It focuses on engaging deeply with the book's main arguments and offering high-quality counterarguments, without resorting to the absurdity heuristic or ad hominem arguments. It covers arguments both for and against the book's core claims without arguing for a particular view, and it aims to be truth-seeking, rigorous, and rational rather than entertaining.

108.423 - 113.928 Stephen McAleese

In other words, my goal is to write a book review that many LessWrong readers would find acceptable and interesting.

Chapter 4: What are the book's four core claims?

114.949 - 136.087 Stephen McAleese

The book's core thesis can be broken down into four claims about how the future of AI is likely to go. 1. General intelligence is extremely powerful and potentially dangerous. Intelligence can completely change the world or even destroy it, and the existence proof that confirms this belief is humans.

Chapter 5: Why is ASI likely and why would it be misaligned?

136.928 - 157.95 Stephen McAleese

Humans had more general intelligence than other animals and ended up completely changing the world as a result. 2. ASI is possible and likely to be created in the near future. Assuming that current trends continue, humanity will probably create an artificial superintelligence (ASI) that vastly exceeds human intelligence in the 21st century.

158.318 - 180.63 Stephen McAleese

Since general intelligence is powerful and is likely to be implemented in AI, AI will have a huge impact on the world in the 21st century. 3. ASI alignment is extremely difficult to solve. Aligning an ASI with human values is extremely difficult, and by default an ASI would have strange, alien values that are incompatible with human survival and flourishing.

180.61 - 191.208 Stephen McAleese

The first ASI to be created would probably be misaligned, not because of malicious intent from its creators, but because its creators would not be competent enough to align it with human values correctly.

Chapter 6: What are the main counterarguments presented against the book's claims?

192.23 - 216.321 Stephen McAleese

4. A misaligned ASI would cause human extinction, and that would be undesirable. Given claims 1, 2, and 3, the authors predict that humanity's default trajectory is to build a misaligned ASI and that doing so would cause human extinction. The authors consider this outcome to be highly undesirable and an existential catastrophe. Any of the four core claims of the book could be criticized.

217.145 - 230.758 Stephen McAleese

Depending on the criticism and perspective, I group the most common perspectives on the future of AI into four camps. 1. AI skeptics believe that high intelligence is overrated or not inherently dangerous.

231.879 - 243.49 Stephen McAleese

For example, some people argue that smart or nerdy people are not especially successful or dangerous, or that computers and LLMs have already surpassed human intelligence in many ways and are not dangerous.

243.993 - 254.003 Stephen McAleese

Another criticism in this category is the idea that AIs can be extremely intelligent but never truly want things in the same way that humans do and therefore would always be subservient and harmless.

254.742 - 269.72 Stephen McAleese

Others in this camp may accept that general intelligence is powerful and influential but believe that ASI is impossible because the human brain is difficult to replicate, that ASI is very difficult to create, or that ASI is so far away in the future that it's not worth thinking about.

Chapter 7: How do singularitarians, doomers, and successionists differ?

269.74 - 289.349 Stephen McAleese

2. Singularitarians, or AI optimists, believe that high general intelligence is extremely impactful and potentially dangerous and that ASI is likely to be created in the near future, but they believe the AI alignment problem is sufficiently easy that we don't need to worry about misaligned ASI.

290.371 - 301.033 Stephen McAleese

Instead, they expect ASI to create a utopian world of material abundance, transforming the world in a mostly desirable way.

301.013 - 319.292 Stephen McAleese

3. The IABIED view. Proponents of this view, also known as AI doomers, believe that general intelligence is extremely powerful, ASI is likely to be created in the future, AI alignment is very difficult to solve, and the default outcome is a misaligned ASI being created that causes human extinction. 4. AI successionists.

Chapter 8: Why does this review focus on the difficulty of alignment?

320.065 - 334.923 Stephen McAleese

Finally, AI successionists believe that the AI alignment problem is irrelevant: if a misaligned ASI is created and causes human extinction, it doesn't matter, because the ASI would be a successor species with its own values, just as humans are a successor species to chimpanzees.

335.984 - 346.917 Stephen McAleese

They believe that increasing intelligence is the universe's natural development path and that it should be allowed to continue even if it results in human extinction. There's an image here with the caption:

347.825 - 354.654 Unknown

Flowchart showing the beliefs of AI skeptics, singularitarians, the IABIED authors, and AI successionists.

355.255 - 373.478 Stephen McAleese

I created a flowchart to illustrate how different beliefs about the future of AI lead to different camps which each have a distinct worldview. Given the impact of humans on the world and rapid AI progress, I don't find the arguments of AI skeptics compelling and I believe the most knowledgeable thinkers and sophisticated critics are generally not in this camp.

374.234 - 382.251 Stephen McAleese

The AI successionist camp complicates things because they say that human extinction is not equivalent to an undesirable future where all value is destroyed.

383.333 - 402.207 Stephen McAleese

It's an interesting perspective, but I won't be covering it in this review because it seems like a niche view, it's only briefly covered by the book, and discussing it involves difficult philosophical problems like whether AI could be conscious. This review focuses on the third core claim above: the belief that the AI alignment problem is very difficult to solve.

403.309 - 421.116 Stephen McAleese

I'm focusing on this claim because I think the other three are fairly obvious or generally accepted by people who have seriously thought about this topic: AI is likely to be an extremely impactful technology in the future, ASI is likely to be created in the near future, and human extinction is undesirable.

421.417 - 437.016 Stephen McAleese

I'm focusing on the third core claim, the idea that the AI alignment problem is difficult, because it seems to be the claim that is most contested by sophisticated critics. Also, many of the book's recommendations, such as pausing ASI development, are conditional on this claim being true.

438.138 - 454.849 Stephen McAleese

If ASI alignment is extremely difficult, we should stop ASI progress to avoid creating an ASI which would be misaligned with high probability and catastrophic for humanity in expectation. If AI alignment is easy, we should build an ASI to bring about a futuristic utopia.
