Dario Amodei
Acknowledge uncertainty.
There are plenty of ways in which the concerns I'm raising in this piece could be moot.
Nothing here is intended to communicate certainty or even likelihood.
Most obviously, AI may simply not advance anywhere near as fast as I imagine.
Or, even if it does advance quickly, some or all of the risks discussed here may not materialize, which would be great. Or there may be other risks I haven't considered.
No one can predict the future with complete confidence, but we have to do the best we can to plan anyway.
Intervene as surgically as possible.
Addressing the risks of AI will require a mix of voluntary actions taken by companies and private third-party actors and actions taken by governments that bind everyone.
The voluntary actions, both taking them and encouraging other companies to follow suit, are a no-brainer for me.
I firmly believe that government actions will also be required to some extent, but these interventions are different in character: they can potentially destroy economic value or coerce unwilling actors who are skeptical of these risks (and there is some chance those skeptics are right).
It's also common for regulations to backfire or worsen the problem they are intended to solve, and this is even more true for rapidly changing technologies.
It's thus very important for regulations to be judicious.
They should seek to avoid collateral damage, be as simple as possible, and impose the least burden necessary to get the job done.
It is easy to say no action is too extreme when the fate of humanity is at stake, but in practice this attitude simply leads to backlash.
To be clear, I think there's a decent chance we eventually reach a point where much more significant action is warranted, but that will depend on stronger evidence of imminent, concrete danger than we have today, as well as enough specificity about the danger to formulate rules that have a chance of addressing it.
The most constructive thing we can do today is advocate for limited rules while we learn whether or not there is evidence to support stronger ones.
With all that said, I think the best starting place for talking about AI's risks is the same place I started from in talking about its benefits: by being precise about what level of AI we are talking about.
The level of AI that raises civilizational concerns for me is the powerful AI that I described in Machines of Loving Grace.
I'll simply repeat here the definition that I gave in that document.