Scott Alexander
Last Friday, Secretary of War Pete Hegseth declared AI company Anthropic a "supply chain risk," the first time this designation has ever been applied to a US company.
The trigger for the move was Anthropic's refusal to allow the Department of War to use their AIs for mass surveillance and autonomous weapons.
A few hours later, Hegseth and Sam Altman declared an agreement in principle for OpenAI's models to be used in the niche vacated by Anthropic.
Altman stated that he had received guarantees that OpenAI's models wouldn't be used for mass surveillance or autonomous weapons either.
But given Hegseth's unwillingness to concede these points to Anthropic, observers speculated that the safeguards in Altman's contract must be weaker, or in the worst case, completely toothless.
The debate centers on the Department of War's demand that AIs be permitted for "all lawful use."
Anthropic worried that mass surveillance and autonomous weaponry would de facto fall into this category.
Hegseth and Altman have tried to reassure the public that they won't, and the parts of their agreement that have leaked to the public cite the statutes that Altman expects to constrain this category.
Altman's initial statement seems to suggest additional prohibitions, but on a closer read, provides little tangible evidence of meaningful further restrictions.
Some alert ACX readers (they wish to remain anonymous, but none are employees of any major AI lab or the Department of War) have done a deep dive into national security law to try to untangle the situation.
Their conclusion mirrors that of Anthropic and the majority of Twitter commentators.
This is not enough.
Current laws against domestic mass surveillance and autonomous weapons have wide loopholes in practice.
Further, many of the rules which do exist can be changed by the Department of War at any time (although OpenAI's national security lead said that "this is not how contract law usually works, and not how the provision is likely to be enforced"). Therefore, these guarantees are not helpful.
(For more, see the section "Comments on OpenAI's FAQ.")