Here is my conversation with Dario Amodei, CEO of Anthropic. Dario is hilarious and has fascinating takes on what these models are doing, why they scale so well, and what it will take to align them.

Watch on YouTube. Listen on Apple Podcasts, Spotify, or any other podcast platform. Read the full transcript here. Follow me on Twitter for updates on future episodes.

Timestamps

(00:00:00) - Introduction
(00:01:00) - Scaling
(00:15:46) - Language
(00:22:58) - Economic Usefulness
(00:38:05) - Bioterrorism
(00:43:35) - Cybersecurity
(00:47:19) - Alignment & mechanistic interpretability
(00:57:43) - Does alignment research require scale?
(01:05:30) - Misuse vs misalignment
(01:09:06) - What if AI goes well?
(01:11:05) - China
(01:15:11) - How to think about alignment
(01:31:31) - Is modern security good enough?
(01:36:09) - Inefficiencies in training
(01:45:53) - Anthropic's Long Term Benefit Trust
(01:51:18) - Is Claude conscious?
(01:56:14) - Keeping a low profile

Get full access to Dwarkesh Podcast at www.dwarkesh.com/subscribe