If we worry too much about AI safety, will this make us "lose the race with China"? (Here "AI safety" means long-term concerns about alignment and hostile superintelligence, as opposed to "AI ethics" concerns like bias or intellectual property.) Everything has tradeoffs, regulation vs. progress is a common dichotomy, and the more important you think AI will be, the more important it is that the free world gets it first. If you believe in superintelligence, the technological singularity, etc., then you think AI is maximally important, and this question ought to be top of mind. But when you look at the tradeoff concretely, it becomes clear that the effect of safety work on the race is too small to matter, so small that even its sign is uncertain. https://www.astralcodexten.com/p/why-ai-safety-wont-make-america-lose
Other episodes from Astral Codex Ten Podcast
Your Review: Joan of Arc (07 Aug 2025)
Book Review: Selfish Reasons To Have More Kids (03 Jun 2025)
Links For February 2025 (11 Mar 2025)
The Emotional Support Animal Racket (28 May 2024)
The Psychopolitics Of Trauma (27 Jan 2024)
Book Review: A Clinical Introduction To Lacanian Psychoanalysis (27 Apr 2022)