Chapter 1: What is the main topic discussed in this episode?

0.031 - 5.842 Nathaniel Whittemore

Today on the AI Daily Brief, we are discussing a question that is extremely easy to ask and much more difficult to answer.

Chapter 2: What is the main question surrounding AI control?

5.902 - 31.453 Nathaniel Whittemore

Who controls AI? The AI Daily Brief is a daily podcast and video about the most important news and discussions in AI. All right, friends, quick announcements before we dive in. First of all, thank you to today's sponsors, KPMG, InsightWise, AIUC, and Blitzy. To get an ad-free version of the show, go to patreon.com slash AI Daily Brief, or you can subscribe on Apple Podcasts.

31.953 - 45.916 Nathaniel Whittemore

To learn more about sponsoring the show, send us a note at sponsors@aidailybrief.ai. While you're on aidailybrief.ai, you can also find out about the other projects in the AIDB ecosystem, including Claw Camp and Enterprise Claw, registration for which is going on right now.

46.016 - 59.356 Nathaniel Whittemore

Basically, that's for enterprises that want to learn how to build agents and agent teams. There's also more podcast-related stuff, like subscribing to the newsletter, which is newly rebooted. Now, if you've been listening this week, you'll know that we had something of a time of it getting back from South America.

Chapter 3: What events led to the conflict between Anthropic and the Pentagon?

59.896 - 72.632 Nathaniel Whittemore

Door to door, it ended up being about 55 hours, and that didn't include the seven hours that it took me to go drop off the rental car and pick up our old car, which was sitting at the airport parking lot. In any case, because of that, I had to miss Wednesday's show, not something that I do very lightly.

Chapter 4: What are Anthropic's red lines regarding AI use?

72.953 - 87.049 Nathaniel Whittemore

And so as a makeup, I had slated to do an extra show over the weekend on the day that I'm usually off. As it turns out, this was a pretty opportune week to have that slot open, because my goodness, as Ron Burgundy would say, boy, that escalated quickly.

87.069 - 104.828 Nathaniel Whittemore

I'm referring, of course, to the skirmish-turned-all-out war between Anthropic and the Pentagon that came to a crescendo and a head on Friday night. The TLDR of what happened is that not only did the Trump administration decide to decline to work with Anthropic, they are attacking them in ways that go far beyond just declining to do business with them.

105.061 - 122.993 Nathaniel Whittemore

Now, for the necessary background and to get caught up with the story from where we left it, we actually have to go back to Thursday, when Anthropic CEO Dario Amodei released a statement about the dispute. Earlier in the week, you'll remember, Defense Secretary Pete Hegseth had given Amodei an ultimatum: remove terms-of-use limits by Friday or be blacklisted from the entire military supply chain.

123.534 - 138.536 Nathaniel Whittemore

Anthropic's red lines were that Claude should not be used for domestic surveillance of Americans or for powering autonomous weapons. Their stated view was that Claude is not reliable enough to power autonomous weaponry and that AI surveillance is undemocratic and, perhaps more pertinently, has underdeveloped legal safeguards.

139.216 - 143.502 Nathaniel Whittemore

The White House's position, meanwhile, was that a technology company should not be dictating how the U.S.

Chapter 5: How did the White House respond to Anthropic's stance?

143.522 - 159.672 Nathaniel Whittemore

government uses that technology and should be fine accepting terminology that allows the U.S. government to use it for all legal uses. Dario's post from Thursday begins, I believe deeply in the existential importance of using AI to defend the United States and other democracies and to defeat our autocratic adversaries.

159.854 - 173.108 Nathaniel Whittemore

And it is worth noting here, especially if and as this conversation gets caught up in broader partisan talking points, historically speaking, Anthropic has been more vocal about things like China not having access to advanced technology than some of their peers.

173.669 - 191.196 Nathaniel Whittemore

Whereas some of the other AI companies have been either fine with or actively lobbying for the ability to sell into China, think specifically around NVIDIA and advanced chips, Amodei and Anthropic have been consistent that they think that is a very, very bad idea. Point being, at least based on the history, Anthropic is not a pacifist organization.

191.957 - 210.534 Nathaniel Whittemore

Now, in the blog post, Amodei continued, Anthropic understands that the Department of War, not private companies, makes military decisions. We've never raised objections to particular military operations, nor attempted to limit use of our technology in an ad hoc manner. However, in a narrow set of cases, we believe AI can undermine rather than defend democratic values.

211.215 - 226.682 Nathaniel Whittemore

Some uses are also simply outside the bounds of what today's technology can safely and reliably do. Two such use cases have never been included in our contracts with the Department of War, and we believe they should not be included now. He then restates Anthropic's objections to mass domestic surveillance and fully autonomous weapons.

226.915 - 244.637 Nathaniel Whittemore

Now when it comes to those exceptions, he says, to our knowledge, those two exceptions have not been a barrier to accelerating the adoption and use of our models within our armed forces to date. Then in one of the spicier sections, he writes, the Department of War has stated they will only contract with AI companies who accede to any lawful use and remove safeguards in the cases mentioned above.

244.617 - 262.741 Nathaniel Whittemore

They have threatened to remove us from their systems if we maintain these safeguards. They have also threatened to designate us as a supply chain risk, a label reserved for U.S. adversaries never before applied to an American company, and to invoke the Defense Production Act to force the safeguards' removal. These latter two threats are inherently contradictory. One labels us as a security risk.

263.041 - 271.232 Nathaniel Whittemore

The other labels Claude as essential to national security. Regardless, he says, these threats do not change our position. We cannot in good conscience accede to their request.

Chapter 6: What were the implications of Trump's directive on Anthropic?

272.039 - 287.279 Nathaniel Whittemore

Now, it is very clear that this public statement did not make Anthropic any friends in the White House. Assistant to the Secretary of War for Public Affairs, Sean Parnell, was diplomatic but clear. The Department of War has no interest in using AI to conduct mass surveillance of Americans, which is illegal.

287.78 - 299.436 Nathaniel Whittemore

Nor do we want to use AI to develop autonomous weapons that operate without human involvement. This narrative is fake and being peddled by leftists in the media. Here's what we are asking. Allow the Pentagon to use Anthropic's model for all lawful purposes.

300.016 - 314.475 Nathaniel Whittemore

This is a simple, common-sense request that will prevent Anthropic from jeopardizing critical military operations and potentially putting our warfighters at risk. We will not let any company dictate the terms regarding how we make operational decisions. They have until 5:01 p.m. on Friday to decide.

314.935 - 329.534 Nathaniel Whittemore

Otherwise, we will terminate our partnership with Anthropic and deem them a supply chain risk for the Department of War. Former Uber official and Undersecretary of War for Research and Engineering Emil Michael was not so diplomatic. He wrote, It's a shame that Dario Amodei is a liar and has a God complex.

Chapter 7: How did OpenAI's position differ from Anthropic's?

329.775 - 344.736 Nathaniel Whittemore

He wants nothing more than to try to personally control the U.S. military and is okay putting our nation's safety at risk. The Department of War will always adhere to the law, but will not bend to the whims of any one for-profit tech company. Now, coming into Friday, it seemed like the court of public opinion was sort of leaning in Anthropic's favor.

345.396 - 363.299 Nathaniel Whittemore

More than 200 Google and OpenAI staff signed a petition that supported Anthropic's red lines, which you can find at notdivided.org. And you even saw a bunch of comments like this one on that post from Sean Parnell. Hi, Sean. Just FYI, nobody believes this and it comes off as ingenuine. I'm generally a conservative-leaning voter. I'm also pretty tech-forward. I am wildly against this.

363.82 - 381.963 Nathaniel Whittemore

Reminder that the entire tech lobby flipped on Biden for the exact same reason in May 2024. So that's where we were heading into Friday morning. Now, outside of the substance of the argument, it was pretty weird to a lot of folks that it was being had so publicly. As quoted by Axios, Senator Thom Tillis said, Why the hell are we having this discussion in public?

382.263 - 397.207 Nathaniel Whittemore

Why isn't this occurring in a boardroom or in the secretary's office? I mean, this is sophomoric. Come Friday morning, it seemed like at least OpenAI was lining up alongside its AI peers, or at least, as CNBC put it, trying to help de-escalate the situation.

397.968 - 410.365 Nathaniel Whittemore

Late on Thursday night, in a memo to his team, OpenAI CEO Sam Altman said, We've long believed that AI should not be used for mass surveillance or autonomous lethal weapons, and that humans should remain in the loop for high-stakes automated decisions. These are our main red lines.

411.007 - 424.069 Nathaniel Whittemore

In an interview on Friday morning with CNBC, Altman said, For all the differences I have with Anthropic, I mostly trust them as a company, and I think they really do care about safety. And I've been happy that they've been supporting our warfighters. I'm not sure where this is going to go.

424.049 - 440.083 Nathaniel Whittemore

And while a lot of folks on social media were excited that Altman seemed to be lining up alongside Anthropic, OpenAI was clearly having conversations with the DoD at the same time. He indeed said explicitly in that memo that they were exploring whether they could deploy their models in classified environments in a way that, in his words, fit with their principles.

440.063 - 457.243 Nathaniel Whittemore

That was the state of things until 3:47 in the afternoon Eastern Time, when President Trump took to Truth Social to write, in all caps, "The United States of America will never allow a radical left woke company to dictate how our great military fights and wins wars. That decision belongs to your commander-in-chief and the tremendous leaders I appoint to run our military.

457.744 - 470.28 Nathaniel Whittemore

The left-wing nutjobs at Anthropic have made a disastrous mistake trying to strong-arm the Department of War and force them to obey their terms of service instead of our Constitution. Their selfishness is putting American lives at risk, our troops in danger, and our national security in jeopardy."

Chapter 8: What are the potential consequences for AI companies in the U.S.?

495.523 - 509.944 Nathaniel Whittemore

We will decide the fate of our country, not some out-of-control radical left AI company run by people who have no idea what the real world is all about. Thank you for your attention to this matter. Make America great again. Defense Secretary or Secretary of War or whatever the heck you want to call him at this point, Pete Hegseth chimed in.

510.524 - 525.808 Nathaniel Whittemore

This week, Anthropic delivered a masterclass in arrogance and betrayal, as well as a textbook case on how not to do business with the United States government or the Pentagon. Our position has never wavered and will never waver. The Department of War must have full, unrestricted access to Anthropic's models for every lawful purpose in defense of the republic.

525.788 - 539.766 Nathaniel Whittemore

Instead, Anthropic and its CEO, Dario Amodei, have chosen duplicity. Cloaked in the sanctimonious rhetoric of effective altruism, they have attempted to strong-arm the United States military into submission, a cowardly act of corporate virtue signaling that places Silicon Valley ideology above American lives.

540.287 - 553.204 Nathaniel Whittemore

The terms of service of Anthropic's defective altruism will never outweigh the safety, the readiness, or the lives of American troops on the battlefield. Their true objective is unmistakable, to seize veto power over the operational decisions of the United States military. That is unacceptable.

553.765 - 569.667 Nathaniel Whittemore

As President Trump stated on Truth Social, the commander-in-chief and the American people alone will determine the destiny of our armed forces, not unelected tech executives. Anthropic's stance is fundamentally incompatible with American principles. Their relationship with the United States Armed Forces and the federal government has therefore been permanently altered.

569.647 - 586.238 Nathaniel Whittemore

In conjunction with the President's directive for the federal government to cease all use of Anthropic's technology, I am directing the Department of War to designate Anthropic a supply chain risk to national security. Effective immediately, no contractor, supplier, or partner that does business with the United States military may conduct any commercial activity with Anthropic.

586.606 - 599.67 Nathaniel Whittemore

Anthropic will continue to provide the Department of War its services for a period of no more than six months to allow for a seamless transition to a better and more patriotic service. America's warfighters will never be held hostage by the ideological whims of big tech. This decision is final.

600.258 - 615.06 Nathaniel Whittemore

Immediately, the lawyers jumped in to start figuring out what the heck the implications of all this were. Senior research fellow Charlie Bullock wrote, Hegseth claims that this declaration that no Pentagon contractor or supplier can do business with Anthropic is effective immediately, which seems absolutely insane. Under 10 U.S.C.

615.08 - 624.875 Nathaniel Whittemore

3252, which is almost certainly the authority Hegseth has to rely on here, there are multiple requirements that the DoW has to fulfill before the supply chain risk declaration becomes effective. They have to complete a risk assessment.
