Curiosity ⇔ Entangled

Scott Aaronson x Zvi Mowshowitz | Why the AI Revolution Won’t Look Like You Expect—And Why That’s More Dangerous

28 Apr 2025

Description

This podcast is produced by volunteers at Accelerator Media, a nonprofit educational media organization. Our work is supported by listeners and viewers like you. If you’d like to help us ignite curiosity and inspire long-term thinking about our shared future, please consider making a donation: https://acceleratormedia.org/donate/

In this episode of Curiosity Entangled, theoretical computer scientist Scott Aaronson and writer Zvi Mowshowitz confront one of the biggest questions of our time: what happens when humanity builds tools that can outthink us? In a wide-ranging and unsparing conversation, they explore the realities of AI risk, gradual human disempowerment, the complexities of steering technological progress, and why alignment efforts may fall short when it matters most.

Scott and Zvi examine the unique nature of the AI revolution—how it’s different from past technological shifts—and why traditional assumptions about progress and control may no longer apply. They tackle the pitfalls of today’s AI safety approaches, the psychological challenge of thinking clearly about diffuse, slow-moving risks, and the educational, societal, and epistemic shifts that the AI era demands. This is a conversation for anyone grappling with the future of intelligence, agency, and civilization itself.

5 Questions This Episode Might Leave You With

1. How could AI lead to humanity’s gradual loss of agency without an obvious “takeover” moment?
2. Why is it so difficult to steer or slow down transformative technologies once they are unleashed?
3. What makes today’s AI fundamentally different from previous technological revolutions?
4. Are current AI safety and interpretability efforts enough—or are we fooling ourselves?
5. How can we cultivate deeper skepticism, clearer thinking, and better education in the age of AI?

Learn more about the guests

Scott Aaronson – Professor of Computer Science at the University of Texas at Austin, expert in quantum computing and theoretical foundations of AI alignment.
https://scottaaronson.blog/

Zvi Mowshowitz – Writer and strategic thinker focusing on decision theory, AI forecasting, and the societal impact of emerging technologies.
https://thezvi.substack.com/
https://x.com/TheZvi
https://www.balsaresearch.com/

Timestamps

00:00:50 – Why this technological revolution leaves no obvious human niche
00:04:00 – How Zvi’s writing method mirrors real-time information processing
00:09:54 – Rethinking AI risk: gradual disempowerment vs. sudden takeover
00:14:10 – Why AI disruption is uniquely hard to govern—and harder to discuss
00:17:00 – GPT-4o, AI as research assistant, and the shifting cognitive landscape
00:21:15 – Why steering is harder than halting in technological revolutions
00:26:05 – Verifying claims and detecting “crank” proofs with AI
00:34:50 – Concrete examples vs. abstract theorizing about AI risk
00:37:10 – Strategic deception: when AIs learn to lie convincingly
00:43:50 – Lessons from past technological disruptions—and why AI is different
00:50:00 – The future of AI alignment: Scott’s new center at UT Austin
00:55:00 – Why pouring cold water on false hope matters for alignment
01:00:25 – Out-of-distribution reasoning: what models guess when data is scarce
01:11:00 – Education in an AI-saturated world: challenges and possibilities
01:17:00 – Learning, motivation, and the loss of intellectual environments
01:23:20 – Oscillating extremism, cultural breakdown, and the AI era
01:30:00 – Keeping focus: resisting distractions in a world of manufactured outrage

Follow Accelerator Media:
https://x.com/xceleratormedia
https://instagram.com/xcelerator.media/
https://linkedin.com/company/accelerator-media-org
