
Founder's Story

The AI Expert Governments Call Too Late: Roman Yampolskiy on the Truth We’re Ignoring | Ep 286 with Dr. Roman Yampolskiy

28 Nov 2025

Description

In this episode, Daniel speaks with Dr. Roman Yampolskiy — one of the world's most cited voices on AI safety — to confront the most urgent question of our time: what happens when humans build something smarter than themselves? From governments skipping safety to AGI already showing sparks of general intelligence, Yampolskiy offers a rare, unfiltered look at how close we may be to losing control of our own creations.

Key Discussion Points:

- Roman opens with the truth that underpins his entire career: the people building AI don't actually understand how it works — and they're not slowing down.
- He explains how the U.S. government conflated "AI safety" with political-correctness topics, entirely missing the existential-risk conversation and accelerating the race with no guardrails.
- He breaks down why "losing control" won't look dramatic — the world may appear normal for years as a superintelligence quietly secures resources, learns human behavior, and waits.
- He explains why AI trained on human data inherits not only our brilliance but our flaws, why Sam Altman understands the risks but can't slow down, and why AGI is already partially here, depending on your definition.
- Roman dives into job loss, economic abundance, and whether anyone should still go to college.
- He shares how AI agents differ from tools, why they're inherently dangerous, and the real threat behind humanoid robots (hint: it's not their physical bodies).
- He explores global competition between the U.S. and China, the inevitability of AGI's rise, and why cooperation is never as simple as people imagine.
- Daniel steers the conversation into Roman's personal journey — the sci-fi spark that led him into AI, how cybersecurity pulled him into safety research, and why rising fame has actually damaged his productivity.
- Roman reveals the bizarre messages he gets from conspiracy theorists and explains the ethical nightmare ahead: if AI becomes conscious, do we owe it rights?
Takeaways:

Humanity is racing toward a future it doesn't fully comprehend. While AI may create abundance, cure disease, and automate nearly every job, it also introduces unprecedented existential risks — ones we are not structurally or politically prepared for. Roman emphasizes that controlling superintelligence remains an unsolved problem, and failing to solve it could make humans "irrelevant by default." Yet he remains hopeful: with enough time and caution, we can still build systems that elevate humanity instead of replacing it.

Closing Thoughts:

Roman's wisdom lands as both a warning and a call for clarity. The future of AI isn't just about innovation — it's about survival, alignment, and responsibility. And in a world sprinting toward intelligence we can't undo, voices like his are not optional — they're essential.

Hosted by Simplecast, an AdsWizz company. See pcm.adswizz.com for information about our collection and use of personal data for advertising.


