Dario Amodei
I can only say that I am focused day and night on how to steer us away from these negative outcomes and towards the positive ones, and in this essay I talk in great detail about how best to do so.
I think the best way to get a handle on the risks of AI is to ask the following question.
Suppose a literal country of geniuses were to materialize somewhere in the world in roughly 2027.
Imagine, say, 50 million people, all of whom are much more capable than any Nobel Prize winner, statesman, or technologist.
The analogy is not perfect, because these geniuses could have an extremely wide range of motivations and behaviors, from completely pliant and obedient to strange and alien.
But sticking with the analogy for now, suppose you were the national security advisor of a major state responsible for assessing and responding to the situation.
Imagine, further, that because AI systems can operate hundreds of times faster than humans, this country is operating with a time advantage relative to all other countries.
For every cognitive action we can take, this country can take 10.
What should you be worried about?
I would worry about the following things.
1. Autonomy risks.
What are the intentions and goals of this country?
Is it hostile, or does it share our values?
Could it militarily dominate the world through superior weapons, cyber operations, influence operations, or manufacturing?
2. Misuse for destruction. Assume the new country is malleable and follows instructions, and thus is essentially a country of mercenaries.
Could existing rogue actors who want to cause destruction, such as terrorists, use or manipulate some of the people in the new country to make themselves much more effective, greatly amplifying the scale of destruction?
3. Misuse for seizing power. What if the country were in fact built and controlled by an existing powerful actor, such as a dictator or rogue corporate actor?