Dwarkesh Patel
The underlying logic for why Anthropic wants these regulations makes sense.
Many of the actions that a lab could take to make AI development safer impose real costs on them.
It could slow them down relative to their competitors.
For example, investing more in aligning AI systems rather than just in raw capabilities; enforcing safeguards against using these models to make bioweapons or carry out cyberattacks; and eventually slowing down the recursive self-improvement loop, where AIs help design more powerful future systems, to a pace where humans can actually stay in the loop rather than just kicking off some kind of uncontrolled singularity.
And these safeguards are meaningless unless the whole industry follows suit, which means that there's a real collective action problem here.
Anthropic has been open about their view that some sort of extensive and involved regulatory apparatus is needed to control AI.
They wrote in their Frontier Safety Roadmap, quote,
At the most advanced capability levels and risks, the appropriate governance analogy may be closer to nuclear energy or financial regulation than to today's approach to software.
So they're imagining something that looks closer to the Nuclear Regulatory Commission or the Securities and Exchange Commission, but for AI.
Now, I cannot imagine a regulatory framework built around the kinds of concepts used in the AI risk discourse that would not be used and abused by a wannabe despot.
The underlying terms here, like "catastrophic risk" or "threats to national security" or "autonomy risk," are so vague and so open to interpretation that you're just handing a fully loaded bazooka to a future power-hungry leader.
These terms can mean whatever the government wants them to mean.
Have you built a model that will tell users that the government's policy on tariffs is misguided?
Well, that's a deceptive model.
It's a manipulative model.
You can't deploy it.
Have you built a model that will not assist the government with mass surveillance?
That's a threat to national security.