
Doom Debates

Scott Aaronson Makes Me Think OpenAI's “Safety” Is Fake, Clueless, Reckless and Insane

11 Dec 2024

Description

Today I’m reacting to the recent Scott Aaronson interview on the Win-Win podcast with Liv Boeree and Igor Kurganov.

Prof. Aaronson is the Director of the Quantum Information Center at the University of Texas at Austin. He’s best known for his research advancing the frontier of complexity theory, especially quantum complexity theory, and for making insights from his field accessible to a wider readership via his blog.

Scott is one of my biggest intellectual influences. His famous “Who Can Name the Bigger Number?” essay and his long-running blog are among my best memories of coming across high-quality intellectual content online as a teen. His posts and lectures taught me much of what I know about complexity theory.

Scott recently completed a two-year stint at OpenAI focusing on the theoretical foundations of AI safety, so I was interested to hear his insider account.

Unfortunately, what I heard in the interview confirms my worst fears about the meaning of “safety” at today’s AI companies: they’re laughably clueless about how to achieve any measure of safety, but instead of doing the adult thing and slowing down their capabilities work, they’re pushing forward recklessly.

Timestamps

00:00 Introducing Scott Aaronson
02:17 Scott's Recruitment by OpenAI
04:18 Scott's Work on AI Safety at OpenAI
08:10 Challenges in AI Alignment
12:05 Watermarking AI Outputs
15:23 The State of AI Safety Research
22:13 The Intractability of AI Alignment
34:20 Policy Implications and the Call to Pause AI
38:18 Out-of-Distribution Generalization
45:30 Moral Worth Criterion for Humans
51:49 Quantum Mechanics and Human Uniqueness
01:00:31 Quantum No-Cloning Theorem
01:12:40 Scott Is Almost an Accelerationist?
01:18:04 Geoffrey Hinton's Proposal for Analog AI
01:36:13 The AI Arms Race and the Need for Regulation
01:39:41 Scott Aaronson's Thoughts on Sam Altman
01:42:58 Scott Rejects the Orthogonality Thesis
01:46:35 Final Thoughts
01:48:48 Lethal Intelligence Clip
01:51:42 Outro

Show Notes

Scott’s interview on Win-Win with Liv Boeree and Igor Kurganov: https://www.youtube.com/watch?v=ANFnUHcYza0
Scott’s blog: https://scottaaronson.blog
PauseAI website: https://pauseai.info
PauseAI Discord: https://discord.gg/2XXWXvErfA

Watch the Lethal Intelligence video and check out LethalIntelligence.ai! It’s an AWESOME new animated intro to AI risk.

Doom Debates’ mission is to raise mainstream awareness of imminent extinction from AGI and build the social infrastructure for high-quality debate. Support the mission by subscribing to my Substack at DoomDebates.com and to youtube.com/@DoomDebates. Thanks for watching.

Get full access to Doom Debates at lironshapira.substack.com/subscribe


