Scott Alexander
What "technical safeguards" have been agreed upon?
What happens if the DOW and OpenAI disagree about what version of these safeguards is appropriate?
Does the DOW have options for recourse if OpenAI provides systems with safeguards that the DOW thinks unduly reduce model performance for specific lawful purposes?
And does the agreement specify that the NSA and other intelligence agencies inside of the DOW are excluded from being able to access OpenAI models?
Broader questions about the situation:
What prevents the DOW from later demanding these restrictions be loosened, as it did with Anthropic?
What recourse does OpenAI have if the DOW violates the terms of its contract with OpenAI?
What would stop the DOW from retaliating against OpenAI, as they did with Anthropic, if the DOW and OpenAI have disagreements in the future?
Given that existing statements haven't always been clear, and Anthropic has alleged that the contract contains "legalese that would allow those safeguards to be disregarded at will," we encourage you to read any responses you receive with a sceptical mindset, and ask yourself whether the response is consistent with OpenAI models being used for autonomous weapons systems or domestic mass surveillance in the colloquial sense of the terms.
This is an audio version of Astral Codex Ten, Scott Alexander's Substack.
Additionally, if you like having an audio version, you can support my work on Patreon at patreon.com/sscpodcast.
To reference this, please link to the original.
To contact me, use astralcodexpodcast@protonmail.com.
Thank you for listening, and I'll speak to you next time.
Welcome to the Astral Codex Ten podcast for the 19th of February, 2026.
Title: Crime as a Proxy for Disorder.
People hate crime and think it's going up.
But actually, crime barely affects most people and is historically low.
So what's going on?