Doom Debates

Carl Feynman, AI Engineer & Son of Richard Feynman, Says Building AGI Likely Means Human EXTINCTION!

04 Jul 2025

Description

Carl Feynman got his Master's in Computer Science and B.S. in Philosophy from MIT, followed by a four-decade career in AI engineering. He's known Eliezer Yudkowsky since the '90s, and witnessed Eliezer's AI doom argument taking shape before most of us were paying any attention! He agreed to come on the show because he supports Doom Debates' mission of raising awareness of imminent existential risk from superintelligent AI.

00:00 - Teaser
00:34 - Carl Feynman's Background
02:40 - Early Concerns About AI Doom
03:46 - Eliezer Yudkowsky and the Early AGI Community
05:10 - Accelerationist vs. Doomer Perspectives
06:03 - Mainline Doom Scenarios: Gradual Disempowerment vs. Foom
07:47 - Timeline to Doom: Point of No Return
08:45 - What's Your P(Doom)™
09:44 - Public Perception and Political Awareness of AI Risk
11:09 - AI Morality, Alignment, and Chatbots Today
13:05 - The Alignment Problem and Competing Values
15:03 - Can AI Truly Understand and Value Morality?
16:43 - Multiple Competing AIs and Resource Competition
18:42 - Alignment: Wanting vs. Being Able to Help Humanity
19:24 - Scenarios of Doom and Odds of Success
19:53 - Mainline Good Scenario: Non-Doom Outcomes
20:27 - Heaven, Utopia, and Post-Human Vision
22:19 - Gradual Disempowerment Paper and Economic Displacement
23:31 - How Humans Get Edged Out by AIs
25:07 - Can We Gaslight Superintelligent AIs?
26:38 - AI Persuasion & Social Influence as Doom Pathways
27:44 - Riding the Doom Train: Headroom Above Human Intelligence
29:46 - Orthogonality Thesis and AI Motivation
32:48 - Alignment Difficulties and Deception in AIs
34:46 - Elon Musk, Maximal Curiosity & Mike Israetel's Arguments
36:26 - Beauty and Value in a Post-Human Universe
38:12 - Multiple AIs Competing
39:31 - Space Colonization, Dyson Spheres & Hanson's "Alien Descendants"
41:13 - What Counts as Doom vs. Not Doom?
43:29 - Post-Human Civilizations and Value Function
44:49 - Expertise, Rationality, and Doomer Credibility
46:09 - Communicating Doom: Missing Mood & Public Receptiveness
47:41 - Personal Preparation vs. Belief in Imminent Doom
48:56 - Why Can't We Just Hit the Off Switch?
50:26 - The Treacherous Turn and Redundancy in AI
51:56 - Doom by Persuasion or Entertainment
53:43 - Differences with Eliezer Yudkowsky: Singleton vs. Multipolar Doom
55:22 - Why Carl Chose Doom Debates
56:18 - Liron's Outro

Show Notes

Carl's Twitter — https://x.com/carl_feynman
Carl's LessWrong — https://www.lesswrong.com/users/carl-feynman
Gradual Disempowerment — https://gradual-disempowerment.ai
The Intelligence Curse — https://intelligence-curse.ai
AI 2027 — https://ai-2027.com
Alcor cryonics — https://www.alcor.org
The LessOnline Conference — https://less.online

Watch the Lethal Intelligence Guide, the ultimate introduction to AI x-risk!

PauseAI, the volunteer organization I'm part of: https://pauseai.info

Join the PauseAI Discord — https://discord.gg/2XXWXvErfA — and say hi to me in the #doom-debates-podcast channel!

Doom Debates' Mission is to raise mainstream awareness of imminent extinction from AGI and build the social infrastructure for high-quality debate.

Support the mission by subscribing to my Substack at DoomDebates.com and to youtube.com/@DoomDebates

Get full access to Doom Debates at lironshapira.substack.com/subscribe


