
EA Forum Podcast (All audio)

“Rerunning the Time of Perils” by William_MacAskill

16 Dec 2025

Description

This post is an experiment in two ways that I explain in comments here and here. But you don't need to read those comments before reading this post.

1. Introduction

In What We Owe the Future I argued that even very severe non-extinction catastrophes are unlikely to permanently derail civilisation. Even if a catastrophe killed 99.9% of people (leaving 8 million survivors), I'd put the chance we eventually re-attain today's technological level at well over 95%. On the usual longtermist picture, this means such catastrophes are extraordinarily bad for present people but don't rival full existential catastrophes in long-run importance. Either we go extinct (or lock into a permanently terrible trajectory), or we navigate the "time of perils" once and reach existential security.

In this article I'll describe a mechanism by which non-extinction global catastrophes would have existential-level importance. I'll define:

A catastrophic setback as a catastrophe that causes civilisation to revert to the technological level it had at least 100 years prior.

Sisyphus risk as the extra existential risk society incurs from the possibility of catastrophic setbacks. (Though, unlike with Sisyphus's plight, we will not suffer such setbacks indefinitely.)

The mechanism is straightforward: if a catastrophic setback occurs [...]

Outline:
(00:19) 1. Introduction
(02:59) 2. A Simple Model of Sisyphus Risk
(03:03) 2.1. One catastrophic setback
(04:40) 2.2. Ord's numbers
(06:36) 3. What Forms Could Post-AGI Catastrophic Setbacks Take?
(06:47) 3.1. Engineered pandemics
(07:48) 3.2. Nuclear war and nuclear winter
(08:31) 3.3. AI-driven catastrophe
(09:01) 3.4. Butlerian backlash and fertility decline
(10:22) 4. Would a Post-Setback Society Retain Alignment Knowledge?
(11:08) 4.1. Digital fragility
(12:44) 4.2. Hardware and software compatibility
(13:43) 4.3. Tacit knowledge
(14:40) 5. Won't AGI make post-AGI catastrophes essentially irrelevant?
(16:02) 6. Implications and Strategic Upshots
(16:07) 6.1. The importance of non-AI risks, especially non-AI non-bio
(17:06) 6.2. When to donate
(18:05) 6.3. A modest argument for more unipolar futures
(19:25) 6.4. The value of knowledge preservation and civilisational kernels
(20:25) 7. Conclusion
(21:38) Appendix: Extensions
(21:43) A.1 Multiple cycles
(29:26) A.2 Higher or lower risk in the rerun
(31:14) A.3 Trajectory change

First published: December 16th, 2025

Source: https://forum.effectivealtruism.org/posts/krhNwLmsHPRoZeneG/rerunning-the-time-of-perils

Narrated by TYPE III AUDIO.
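To make the definitions above concrete, here is a minimal Python sketch of the kind of "one catastrophic setback" model the outline refers to. Its structure (existential risk from the first run through the time of perils, plus the risk from a single rerun after a setback) follows the definitions above, but the functional form and all parameter values, including the illustrative 1-in-6 first-run risk loosely borrowed from Toby Ord's estimate in The Precipice, are assumptions for illustration rather than figures taken from the post.

# Toy model, not the post's model: total existential risk when the
# "time of perils" may be rerun once after a catastrophic setback.
def total_existential_risk(x, s, x_rerun=None):
    """x: existential risk in the first run through the time of perils.
    s: chance of a catastrophic setback, given the first run is survived.
    x_rerun: existential risk in the rerun (defaults to x, i.e. history "replays")."""
    if x_rerun is None:
        x_rerun = x
    # Either we fail outright in the first run, or we survive it,
    # suffer a setback, and then fail in the rerun.
    return x + (1 - x) * s * x_rerun

baseline = total_existential_risk(x=1/6, s=0.0)      # no setbacks possible
with_setback = total_existential_risk(x=1/6, s=0.5)  # 50% chance of one setback
sisyphus_risk = with_setback - baseline               # the *extra* risk from setbacks
print(f"baseline: {baseline:.3f}, with setback: {with_setback:.3f}, Sisyphus risk: {sisyphus_risk:.3f}")

On these made-up numbers, the possibility of one setback adds roughly 7 percentage points of existential risk on top of the 16.7% baseline, which illustrates why the post treats non-extinction setbacks as having existential-level importance.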

