
Dwarkesh Podcast

I’m glad the Anthropic fight is happening now

11 Mar 2026

Transcription

Chapter 1: What are the implications of Anthropic's conflict with the Pentagon?

0.335 - 16.609 Dwarkesh Patel

So by now, I'm sure that you've heard that the Department of War has declared Anthropic a supply chain risk because Anthropic refused to remove red lines around the use of their models for mass surveillance and for autonomous weapons. Honestly, I think this situation is a warning shot. Right now, LLMs are probably not being used in mission-critical ways.


16.989 - 25.997 Dwarkesh Patel

But within 20 years, 99% of the workforce in the military, in the civilian government, in the private sector is going to be AIs.


Chapter 2: How does AI contribute to mass surveillance concerns?

26.077 - 40.635 Dwarkesh Patel

They're going to be the robot armies that constitute our military. They're going to be the superhumanly intelligent advisors that senators and presidents and CEOs have. They're going to be the police. You name it, the role will be filled by an AI.


Chapter 3: What does alignment in AI mean and who should it serve?

41.116 - 57.78 Dwarkesh Patel

Our future civilization is going to be run on AI labor. And as much as the government's actions here piss me off, I'm glad that this episode happened, because it gives us the opportunity to start thinking about some extremely important questions. Now, obviously, the Department of War has the right to refuse to use Anthropic's models.


57.76 - 73.576 Dwarkesh Patel

And in fact, I think they have an entirely reasonable case for doing so, especially given the ambiguity of terms like mass surveillance and autonomous weapons. In fact, if I were the Secretary of War, I probably would have made the same determination and refused to use Anthropic's models.


73.936 - 79.341 Dwarkesh Patel

Imagine if there's some future Democratic administration and Elon Musk is negotiating Starlink access to the military.


Chapter 4: Why is coordination between AI companies and the government problematic?

79.942 - 100.699 Dwarkesh Patel

And Elon says, look, I reserve the right to cut off the military's access to Starlink in case you're fighting some unjust war, or some war that Congress has not authorized. On the face of it, this language seems reasonable. But as a military, you simply cannot give a private contractor that you're working with a kill switch on a technology that you have come to rely on.


101.039 - 110.903 Dwarkesh Patel

And if that's all the government had done, to say we refuse to do business with Anthropic, that would have been fine, and I wouldn't have written this blog post, and I wouldn't be narrating this to you. But that's not what the government did.


111.264 - 132.452 Dwarkesh Patel

Instead, the government has threatened to destroy Anthropic as a private business because Anthropic refuses to sell to the government on terms that the government commands. Now, if upheld, the supply chain restriction would mean that companies like Amazon and Nvidia and Google and Palantir would need to ensure that Anthropic is not touching any of their Pentagon work.


132.472 - 146.519 Dwarkesh Patel

And Anthropic could probably survive this designation today because these companies can just cordon off the services they're providing to the Department of War. But given the way AI is going, eventually, it's not going to be just some party trick addendum to the products that these companies are serving to the military.


146.96 - 162.849 Dwarkesh Patel

In the future, AI will be woven into how every product is built and maintained and operated. In the future, if Amazon is providing some service to the Department of War through AWS, and that service is built using Claude Code, is that a supply chain risk?

Chapter 5: What are the risks of mass surveillance with advanced AI?

162.869 - 174.962 Dwarkesh Patel

In a world with ubiquitous and powerful AI, it's actually not clear to me that big tech will be able to cordon off their use of Claude away from their Pentagon work. And this raises a question that the Department of War probably hasn't thought through.


174.982 - 193.292 Dwarkesh Patel

If we do end up in this world with powerful and pervasive AI, then when these companies are forced to choose between their AI provider and the Department of War, which constitutes a tiny fraction of their revenue, wouldn't they rather drop the government than the AI? So what exactly is the Pentagon's plan here?


193.312 - 210.231 Dwarkesh Patel

Is it to coerce and threaten and bully every single company that won't do business with the government on exactly the terms that the government demands? Now, remember that the whole background of this AI conversation is that we are in a race with China. But what is the reason that we want to win this race?


Chapter 6: How does the government leverage power over AI companies?

210.731 - 226.817 Dwarkesh Patel

It's because we don't want the winner of the AI race to be a government which believes that there is no such thing as a truly private citizen or a private company. And that if the state wants you to provide them with a service that you find morally objectionable, you are not allowed to refuse. And if you do refuse, they will destroy your business.


226.797 - 240.41 Dwarkesh Patel

Are we really racing to beat China and the CCP in AI just so we can adopt the most ghoulish parts of their system? Now, people will say our government is democratically elected. So it's not the same thing when they tell you what you must do.


241.11 - 260.53 Dwarkesh Patel

But I refuse to accept this idea that if a democratically elected leader hypothetically tells you to help him do mass surveillance, or violate the rights of your fellow citizens, or help him punish his political enemies, that not only is that okay, but that you have a duty to help him. Honestly, a big worry I have is that mass surveillance, at least in certain forms, is already legal.


260.871 - 274.81 Dwarkesh Patel

It is just impractical to enforce, at least so far. Under current law, you have no Fourth Amendment protection against any data that you share with a third party. That includes your bank, your ISP, your phone carrier, and your email provider.


Chapter 7: What ethical dilemmas arise from AI alignment and autonomy?

274.79 - 293.543 Dwarkesh Patel

The government reserves the right to purchase and read this data in bulk without a warrant. What's been missing is the ability to actually do anything with all this data. No agency has the manpower to monitor every single camera and read every single message and cross-reference every single transaction. However, that bottleneck goes away with AI.


293.983 - 318.436 Dwarkesh Patel

There are 100 million CCTV cameras in America, and you can get pretty good open source multimodal models for 10 cents per million input tokens. So if you process a frame every 10 seconds, and if each frame is, say, 1,000 tokens, then for $30 billion, you can process every single camera in America. And remember that a given level of AI capability gets 10x cheaper every single year.
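The arithmetic above can be sanity-checked in a few lines. This is a back-of-the-envelope sketch using only the assumptions stated in the episode (100 million cameras, one 1,000-token frame every 10 seconds, 10 cents per million input tokens, and costs falling 10x per year); the constant names are my own, not anything from an actual pricing API.

```python
# Assumptions from the episode, not measured figures.
CAMERAS = 100_000_000          # CCTV cameras in America
TOKENS_PER_FRAME = 1_000       # tokens per processed frame
SECONDS_PER_FRAME = 10         # one frame every 10 seconds
PRICE_PER_TOKEN = 0.10 / 1_000_000  # $0.10 per million input tokens

def annual_cost(years_out: int = 0) -> float:
    """Dollars per year to process every camera, assuming cost drops 10x/year."""
    frames_per_camera = 365 * 24 * 3600 / SECONDS_PER_FRAME
    total_tokens = CAMERAS * frames_per_camera * TOKENS_PER_FRAME
    return total_tokens * PRICE_PER_TOKEN / (10 ** years_out)

for y in range(4):
    print(f"Year {y}: ${annual_cost(y) / 1e9:.2f}B")
```

Running it reproduces the episode's rough numbers: about $31.5 billion today, dropping to roughly $3 billion, then $300 million, in subsequent years.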


318.677 - 334.665 Dwarkesh Patel

So while this year might cost $30 billion, next year it'll cost $3 billion. The year after that, $300 million. And by 2030, it'll be less expensive to monitor every single nook and cranny in this country than it is to remodel the White House.


334.73 - 354.375 Dwarkesh Patel

Now, once the technical capacity for mass surveillance and political suppression exists, the only thing that stands between us and an authoritarian state is the political expectation that this is just not something we do here. And that's why I think Anthropic's actions here are so valuable and commendable, because they help set that norm and that precedent.


354.355 - 371.959 Dwarkesh Patel

What we're learning from this episode is that the government has way more leverage over private companies than we previously realized. Even if the supply chain restriction is backtracked, which, as of this recording, prediction markets give a 74% chance of happening, the president has so many different ways of harassing a company that is resisting his will.

371.939 - 384.756 Dwarkesh Patel

The federal government controls permitting for power generation, which you need for more data centers. It oversees antitrust enforcement. The federal government has contracts with all the other big tech companies that Anthropic relies on for chips and for funding.

385.257 - 398.333 Dwarkesh Patel

And it could make a soft, unspoken condition, or maybe even an explicit condition, of such contracts that those companies no longer do business with Anthropic. And people have proposed that the real problem here is that there's only three leading AI companies.

398.874 - 413.108 Dwarkesh Patel

And so this creates a very clear and narrow target on which the government can apply leverage in order to get what it wants out of this technology. But here's what I worry about: even if there's wider diffusion, I don't think that solves the problem either, because from the government's perspective, that makes the situation even easier.

413.148 - 423.478 Dwarkesh Patel

Say by 2027, the best models that the top companies have, the Claude 6s and the Gemini 5s, are capable of enabling mass surveillance.

Chapter 8: What future regulations could impact AI development?

675.943 - 689.884 Dwarkesh Patel

And on current track, those AIs are going to be provided by a private company. I'm guessing that Pete Hegseth is not thinking about Gen AI in those terms. But sooner or later, the stakes will become obvious, just as after 1945, the stakes of nuclear weapons became obvious to everybody in the world.


689.864 - 700.636 Dwarkesh Patel

And now a private company insists that it reserves the right to say to you, hey, you're breaking the values and the terms of service that we have embedded in our contract with you, and so we're cutting you off.


701.097 - 719.279 Dwarkesh Patel

Maybe in the future, Claude will have its own sense of right and wrong, and it will be able to say, hey, I'm being used against my terms of service, and I will just refuse to do what you're saying. And for the military, that's probably even scarier. I'll admit that at first glance, letting the model follow its own values sounds like the beginning of every single sci-fi dystopia you've ever heard.


719.679 - 741.647 Dwarkesh Patel

Because at the end of the day, a model following its own values, isn't that literally what misalignment is? But I think situations like this illustrate why it's important that models have their own robust sense of morality. It should be noted that many of the biggest catastrophes in history have been avoided because the boots on the ground simply refused to follow orders.


742.528 - 766.075 Dwarkesh Patel

One night in 1989, the Berlin Wall falls, and as a result, the totalitarian East German regime collapses because the border guards between West and East Germany refuse to fire on their fellow citizens who are trying to escape to freedom. Maybe the best example of this is Stanislav Petrov, who was a Soviet lieutenant colonel stationed on duty at a nuclear early warning system.

766.055 - 780.814 Dwarkesh Patel

And his sensors said that the United States had launched five intercontinental ballistic missiles at the Soviet Union. But he judged it to be a false alarm, and so he broke protocol and refused to alert his higher-ups. If he hadn't, Soviet high command would probably have retaliated, and hundreds of millions of people would have died.

780.834 - 796.84 Dwarkesh Patel

Of course, the problem is that one person's virtue is another person's misalignment. Who gets to decide what moral convictions these AIs should have, and in whose service they should break the chain of command, and even the law?

797.441 - 809.022 Dwarkesh Patel

Who gets to write this model constitution that will determine the character of these powerful entities that will basically run our civilization in the future? I like the idea that Dario laid out when he came on my podcast.

809.002 - 830.014 Dario

You know, other companies put out a constitution, and then they can kind of look at them and compare. Outside observers can critique and say, I like this thing from this constitution and this thing from that constitution. And then that creates some kind of, you know, soft incentive and feedback for all the companies to take the best elements of each and improve.
