Jaeden Shafer
Obviously, applying it in dispute over AI guardrails is going to be an interesting expansion of how this has been used in the past.
Anthropic has basically been pushing back, so we'll see what happens.
They have some sort of deadline now, but they've for a long time said that they don't want their technology to be used for mass domestic surveillance or for fully autonomous weapon systems.
And they also say that they don't plan to relax any of those restrictions for the US government, even with them asking.
So it's gonna be very interesting to hear what happens here.
And the officials over at the Pentagon are arguing, basically, that military use of AI should be governed by US law and constitutional constraints rather than by the internal policies of a private company.
So, of course, now you have this whole standoff.
There's a lot of ideological tensions here, as many know that listen to the podcast.
David Sacks, the administration's AI advisor, has publicly criticized Anthropic's safety posture as being overly restrictive.
And Dean Ball of the Foundation for American Innovation is taking the other side of this. He says that invoking the DPA in this context would signal some deeper instability, framing it as the government using economic leverage against a company over a policy disagreement.

So there are obviously two sides to this argument. I think Anthropic is in an interesting position right now, and here's why it feels like the US government, and the Department of Defense specifically, is trying to force Anthropic's hand. In a free market, the Pentagon would just say, okay, fine, you don't want us to be able to work with you, we'll go find someone else, right? Maybe Google, or OpenAI, or xAI with Grok, one of these other alternatives.
But apparently there are reports saying that the only frontier AI lab with classified Department of Defense access right now is Anthropic.
So basically the Pentagon has no immediate alternatives.
And we know they're actively using this, because it came out, when they did the raid and captured Maduro, that they were using Anthropic for that whole operation, to properly execute it. That was the AI model that essentially ran that raid, which was obviously very successful, whether you agree with it or not. No American soldiers were killed in that raid, and it happened very quickly and efficiently.
So Anthropic seems to be the only vendor right now, and that's kind of the problem.