Scott Alexander
Some types of fully autonomous weapons are clearly appropriate today, for example some missile defences for navy ships.
Many more will plausibly have to be developed in the future, especially if other countries pursue them.
But a good system of checks and balances for them does not yet exist.
AI companies should take care not to sign contracts that could require them to build systems without adequate safeguards, akin to the safeguards provided by a soldier's judgement and respect for the Constitution.
Footnote.
These safeguards might initially have to be broader than legal use, since current law is not yet designed with powerful autonomous systems in mind.
Back to the text.
Comments on OpenAI's FAQ.
OpenAI provided an FAQ, which we think is misleading.
While we aren't lawyers, we've done our best to lay out our reasoning for this belief. We have also consulted an expert in national security law about the excerpt of the contract provided in OpenAI's announcement, and checked that their views were consistent with ours.
Will this deal enable the Department of War to use OpenAI models to power autonomous weapons?
"No.
Based on our safety stack, our cloud-only deployment, the contract language and existing laws, regulation and policy, we are confident that this cannot happen.
We will also have OpenAI personnel in the loop for additional assurance."
Since the law straightforwardly permits autonomous weapons, and the contract permits any autonomous weapons allowed by the law, the contract language and existing laws, regulation and policy do nothing to prohibit this.
OpenAI hasn't shared enough information about their safety stack for us to be able to evaluate that claim.
See below for comments on cloud-only deployment.
Our national security law expert was also very sceptical that the Department of War would have OpenAI personnel meaningfully in the loop in sensitive contexts.
Will this deal enable the Department of War to use OpenAI models to conduct mass surveillance on US persons?
No.