Machine Alignment Monday 3/13/23
https://astralcodexten.substack.com/p/why-i-am-not-as-much-of-a-doomer

(see also Katja Grace and Will Eden's related cases)

The average online debate about AI pits someone who thinks the risk is zero against someone who thinks it's any other number. I agree these are the most important debates to have for now.

But within the community of concerned people, numbers vary all over the place:

Scott Aaronson says 2%
Will MacAskill says 3%
The median machine learning researcher on Katja Grace's survey says 5 - 10%
Paul Christiano says 10 - 20%
The average person working in AI alignment thinks about 30%
Top competitive forecaster Eli Lifland says 35%
Holden Karnofsky, on a somewhat related question, gives 50%
Eliezer Yudkowsky seems to think 90%

As written this makes it look like everyone except Eliezer is somewhere between 2% and 50%.

I go back and forth more than I can really justify, but if you force me to give an estimate it's probably around 33%; I think it's very plausible that we die, but more likely that we survive (at least for a little while). Here's my argument, and some reasons other people are more pessimistic.
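To make the spread concrete, here is a minimal sketch (my own illustration, not from the post) that tabulates the estimates listed above and prints their range and median; reducing the two ranged estimates to their midpoints is my simplification, not the author's.

```python
# Sketch: summarize the spread of the doom estimates quoted above.
# Ranged estimates ("5 - 10%", "10 - 20%") are reduced to midpoints,
# which is an assumption for illustration, not from the source.
from statistics import median

estimates = {
    "Scott Aaronson": 2,
    "Will MacAskill": 3,
    "Median ML researcher (Grace survey)": 7.5,   # midpoint of 5 - 10%
    "Paul Christiano": 15,                        # midpoint of 10 - 20%
    "Average alignment researcher": 30,
    "Scott Alexander": 33,
    "Eli Lifland": 35,
    "Holden Karnofsky (related question)": 50,
    "Eliezer Yudkowsky": 90,
}

values = sorted(estimates.values())
print(f"range:  {values[0]}% to {values[-1]}%")
print(f"median: {median(values)}%")
```

Run as-is, this prints a range of 2% to 90% and a median of 30%, which is the point of the list: even among people who take the risk seriously, the estimates span nearly two orders of magnitude.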