Overall, we can't see how any of OpenAI's claimed methods for enforcing their red lines would work. The one possible exception is if they're allowed to implement technical safeguards that block certain lawful uses, but they've shared so little about this that we can't evaluate it.
Boaz Barak suggests that this is the case.
If this is right, it's strange that they don't stress this elsewhere as the linchpin of their approach, or point to the part of the agreement that guarantees them this ability.
Further clarification on this point would be very helpful.
Questions that you should be asking.
If you have access to OpenAI or DoW (Department of War) decision makers as an employee, journalist, or lawmaker, these are questions you should be asking.
Immediate questions about the contract.
First and foremost, ask to see the full contract, or as much of it as you can get.
Scrutinize it yourself, or run it by a lawyer in a conversation where attorney-client privilege exists: basically, when you're talking with them with the explicitly stated intent of potentially securing their legal counsel, or once you've formally secured them as your legal counsel.
Beyond that... Does OpenAI's definition of fully autonomous weapons include non-edge-deployed systems, like drones operated remotely by AI systems in the cloud?
If so, what prevents the DoW from using OpenAI models in this way?
The DoW has been insistent that private companies shouldn't dictate how it can use models.
OpenAI says they "retain full control over the safety stack we deploy."
How are these compatible?
Can you share an excerpt from the agreement that describes OpenAI's control over the safety stack?
Would OpenAI's models assist with bulk analysis of Americans' data purchased from third parties?
Will OpenAI's technical safeguards intentionally block any lawful usage that goes against your red lines?
Who determines if use is "unlawful"?
Does OpenAI have recourse if it believes a use is unlawful but the DoW disagrees?