One of the most common arguments against AI safety is: Here's an example of a time someone was worried about something, but it didn't happen. Therefore, AI, which you are worried about, also won't happen.

I always give the obvious answer: "Okay, but there are other examples of times someone was worried about something, and it did happen, right? How do we know AI isn't more like those?" The people I'm arguing with always seem so surprised by this response, as if I'm committing some sort of betrayal by destroying their beautiful argument.

The first hundred times this happened, I thought I must be misunderstanding something. Surely "I can think of one thing that didn't happen, therefore nothing happens" is such a dramatic logical fallacy that no human is dumb enough to fall for it. But people keep bringing it up, again and again. Very smart people, people who I otherwise respect, make this argument and genuinely expect it to convince people!

Usually the thing that didn't happen is overpopulation, global cooling, etc. But most recently it was some kind of coffeepocalypse: https://www.astralcodexten.com/p/desperately-trying-to-fathom-the