They are also often impatient and upset when they do not obtain the things they desire immediately.
A hallmark of adulthood is the ability to wait, the ability to delay gratification.
I think that is what we need to do with AI.
To be clear, I still believe in the visions I wrote in Machines of Loving Grace, that is still my goal.
However, I think this goal requires patience from me, from Anthropic, and from human civilization.
We cannot rush into societal changes of this magnitude without adequate preparation.
While the logic generally holds that more cautious and responsible actors ought to win the AI race, it is necessary to accurately locate the finish line.
We think that at this time the industry may be racing in the wrong direction, possibly off a cliff and into a volcano, and that is not a race I wish to win.
Nor do I wish for any others to win such a race to the bottom.
To clarify, we think that on the current trajectory, anyone who creates a truly powerful AI will get a country of geniuses in a data center as I described, but will be risking that country not sharing their values and not taking instructions.
We think this is surmountable and have approaches to explore, but it will take an unclear amount of time.
I do not want either an authoritarian or democratic regime to unleash an unfriendly country of geniuses, but nothing good happens if I do it first.
We will lead by example and demonstrate with our actions that this is our sincere belief.
We have not stopped work, but we are being intentional about which work we do, and realistic about the bottlenecks and challenges required to achieve loving grace.
In short, Dario Amodei says he doesn't want to race off a cliff and into a volcano.
And he intends for Anthropic to lead by example.
Jack Clark, Anthropic co-founder and head of policy, elaborates on the plan.
At a practical level, in many ways it doesn't matter what others do: we don't want to take actions we would regret, and we don't want to pull a trigger on ourselves.
But at the same time, we are sending a clear signal to other labs, to the US government, world governments, foreign powers, and the public that the promise of AI is very great and so are the risks.