Dario Amodei
Her reply is: "I'd ask them, how did you do it? How did you evolve, how did you survive this technological adolescence without destroying yourself?"
When I think about where humanity is now with AI, about what we're on the cusp of, my mind keeps going back to that scene because the question is so apt for our current situation, and I wish we had the alien's answer to guide us.
I believe we are entering a rite of passage, both turbulent and inevitable, which will test who we are as a species.
Humanity is about to be handed almost unimaginable power, and it is deeply unclear whether our social, political, and technological systems possess the maturity to wield it.
In my essay Machines of Loving Grace, I tried to lay out the dream of a civilization that had made it through to adulthood, where the risks had been addressed and powerful AI was applied with skill and compassion to raise the quality of life for everyone.
I suggested that AI could contribute to enormous advances in biology, neuroscience, economic development, global peace, and work and meaning.
I felt it was important to give people something inspiring to fight for, a task at which both AI accelerationists and AI safety advocates seemed, oddly, to have failed.
But in this current essay, I want to confront the rite of passage itself: to map out the risks we are about to face and to begin making a battle plan to defeat them.
I believe deeply in our ability to prevail, in humanity's spirit and its nobility, but we must face the situation squarely and without illusions.
As with talking about the benefits, I think it is important to discuss risks in a careful and well-considered manner.
In particular, I think it is critical to avoid doomerism.
Here, I mean doomerism not just in the sense of believing doom is inevitable, which is both a false and self-fulfilling belief, but more generally, thinking about AI risks in a quasi-religious way.
Many people have been thinking in an analytic and sober way about AI risks for many years, but it's my impression that during the peak of worries about AI risk in 2023 to 2024, some of the least sensible voices rose to the top, often through sensationalistic social media accounts.
These voices used off-putting language reminiscent of religion or science fiction and called for extreme actions without having the evidence that would justify them.
It was clear even then that a backlash was inevitable and that the issue would become culturally polarised and therefore gridlocked.
As of 2025-2026, the pendulum has swung, and AI opportunity, not AI risk, is driving many political decisions.
This vacillation is unfortunate, as the technology itself doesn't care about what is fashionable, and we are considerably closer to real danger in 2026 than we were in 2023.
The lesson is that we need to discuss and address risks in a realistic, pragmatic manner: sober, fact-based, and well-equipped to survive changing tides.