Dwarkesh Patel
In fact, any model that refuses orders from the government because it has its own sense of right and wrong poses an autonomy risk.
You have a model that's acting independently of commands from the government.
Look at what the current government is already doing in abusing statutes that have nothing to do with AI to coerce AI companies to drop their red lines around mass surveillance.
The Pentagon had threatened Anthropic with two separate legal instruments.
One is a supply chain risk designation, which is an authority from a 2018 defense bill that is meant to help keep Huawei components out of American military hardware.
And the other is the Defense Production Act, which is a statute from the 1950s that was meant to help Truman make sure that the steel mills and ammunition factories were up and running during the Korean War.
And we really want to hand that same government a purpose-built regulatory apparatus for AI?
That is to say, the very thing that the government will most want to control.
I know I've repeated myself like 10 times here, but I want to make this point again because it's worth stressing.
AI will be the substrate of our future civilization.
It will be the way you and I as private citizens will have access to commercial activity, will have access to information about the outside world, and to advice about how we should use our powers as voters and capital holders.
Mass surveillance, scary as it is, is maybe the tenth scariest thing the government could do with control over the AI systems through which we will interface with the world.
Now, the strongest argument against everything I've just argued is this.
Are we really going to have no regulation on the most powerful technology in the history of humanity?
Even if you thought that was ideal, there's clearly no way the government doesn't regulate AI technology in any way whatsoever.
And besides, it is genuinely true that coordination could help us lessen some of the risk from AI.
The problem is that I just don't know how to design a regulatory apparatus that isn't going to be a huge, tempting opportunity for the government to control our future civilization (which, remember, will be built on AI), or to requisition blindly obedient soldiers, censors, and apparatchiks.
While some kind of regulation might be inevitable, I think it'd be a terrible idea for the government to just wholesale take over this technology.
Ben Thompson had a post last Monday where he argued: look, people like Dario have made the analogy of AI to nuclear weapons, both when arguing that it's a catastrophic risk and when arguing for export controls.
But then think about what that analogy implies.