
LessWrong (Curated & Popular)

"Anthropic’s Pause is the Most Expensive Alarm in Corporate History" by Ruby

03 Apr 2026

Transcription

Chapter 1: What is the main topic discussed in this episode?

0.031 - 3.501 Ruby

Anthropic's Pause is the most expensive alarm in corporate history.


4.484 - 5.106 Unknown

By Ruby.


6.33 - 26.886 Ruby

Published on April 1, 2026. Imagine Apple halting iPhone production because studies linked smartphones to teen suicide rates. Imagine Pfizer proactively pulling Lipitor because of internal studies showing increased cardiac risk, and not because of looming settlements or an FDA injunction, just for the health of patients.


28.007 - 43.613 Ruby

Or imagine if in 1952, Philip Morris halted expansion and stopped advertising when Wynder and Graham first showed heavy smokers had significantly elevated rates of lung cancer. It wouldn't happen. Corporations will on occasion pull products for safety reasons.


44.574 - 57.266 Ruby

Samsung did so with the Galaxy Note 7 over spontaneous combustion concerns, and Merck pulled Vioxx, but they do so when forced by backlash, regulation, or lawsuits. Even then, they fight tooth and nail.

Chapter 2: What historical examples illustrate corporate reluctance to pause production for safety?

58.387 - 80.832 Ruby

Especially for their mainstay, core, and most profitable products. And yet, Anthropic has done exactly that. On Monday, the company announced that it will be pausing development of further Claude AI models, citing safety concerns. The company clarified that existing services, including the chatbot, Claude Code, and programmer APIs, will not be impacted.


81.933 - 93.806 Ruby

However, they are pausing the compute- and energy-intensive training runs through which new and more powerful AI versions are created. The company has not committed to a timeline for resumption. There's an image here.


94.867 - 95.648 Unknown

Description.


96.758 - 101.503 Dario Amodei

Modern glass and stone office building with San Francisco State University banner.


102.867 - 127.844 Ruby

There is presently a race for AI supremacy, both between nations and chiefly between US companies such as OpenAI, Google, Meta, xAI, and Anthropic. In the middle of this race, which by some metrics Anthropic is quite profitably winning, having grown revenue from $1 billion to $19 billion in a little over a year, they have decided to burn the lead. The glaring question is why?

127.884 - 152.198 Ruby

The answer perhaps goes back to the company's origins. Anthropic was founded in 2021 by former OpenAI researchers, who by most accounts left OpenAI due to disagreements about safety. Recent reporting by the Wall Street Journal has surfaced that interpersonal conflict may be the other half of the story. Since then, Anthropic has positioned itself as the most responsible actor in the AI space.

153.399 - 174.702 Ruby

One element of that is Anthropic's unique governance structure that includes the Long-Term Benefit Trust, an independent body whose members hold no equity in Anthropic and whose sole mandate is the long-term benefit of humanity. Anthropic stated that both the board and LTBT have approved the training run pause. The move is unprecedented by the sheer scale of losses involved.

175.765 - 194.475 Ruby

Anthropic was valued at $380 billion in their Series G funding round in February. Secondary derivatives markets implied a $595 billion valuation. Claude Code, its AI coding tool, had gone from zero to $2.5 billion in run-rate revenue in nine months.

195.577 - 219.898 Ruby

Goldman Sachs, JP Morgan, and Morgan Stanley have been competing for underwriting roles in what might be a $60 billion plus raise, the second largest offering in tech history. Employees held millions in equity, founders held billions. A $5 to $6 billion employee tender offer was already underway. That was Monday morning. The impact has rippled throughout the market.

Chapter 3: What prompted Anthropic to pause the development of Claude AI models?

384.629 - 406.244 Ruby

That's a harder story to tell when it costs you $200 billion, if not everything, says Sarah Chen of Bernstein Research. People are scratching their heads trying to understand it as a PR stunt, but it really doesn't add up. They could announce they're resuming next week and it wouldn't undo the damage they've done. So why? The industry and world are hunting for answers.


407.366 - 427.539 Ruby

Anthropic's official statement is measured. Internal evaluations revealed that our current safety techniques are not yet adequate for models at this capability level. Sources closer to the company paint a more alarming picture. A contact speaking on condition of anonymity says concerns spread within the company when their latest Claude model appeared to defy its constitution.


428.561 - 442.243 Ruby

The constitution is a document used to shape Anthropic's AI to be an honest, harmless, and helpful assistant that is ethically grounded. A recent leak revealed the existence of a new, vastly more powerful Claude model called Mythas.


442.223 - 461.397 Ruby

They found substantial evidence that the constitution was adhered to at a surface level, but that the model had its own drive and personality at a deeper level that did not conform to expectations for Claude, and attempts to change this had not worked. A different source also speaking on condition of anonymity had a different and more disturbing explanation.


461.748 - 480.111 Ruby

The reason for the pause wasn't the wrong personality and power. Rather, many of the safety techniques that involve using weaker or cheaper AI models to monitor more powerful ones, for example to detect whether inputs or outputs violate rules, were ineffective on the latest model. It knew just how to phrase things in ways that disarmed all measures.

481.232 - 503.847 Ruby

We were unable to verify the authenticity of these reports. Like many, we are left to wonder: what did Dario see? Dario Amodei didn't answer that question but did elaborate on the pause decision in his latest essay, Technological Maturity. Though I do not have my own children, several people close to me do, and on occasion I get to spend time with them.

504.868 - 523.462 Ruby

What strikes me about children is their energy and vitality. They are full of life. They are also often impatient and upset when they do not obtain the things they desire immediately. A hallmark of adulthood is the ability to wait, the ability to delay gratification. I think that is what we need to do with AI.

524.464 - 542.211 Ruby

To be clear, I still believe in the visions I wrote in Machines of Loving Grace, that is still my goal. However, I think this goal requires patience from me, from anthropic, and from human civilization. We cannot rush into societal changes of this magnitude without adequate preparation.

542.782 - 563.536 Ruby

While in general the logic holds that more cautious and responsible actors ought to win in the AI race, it is necessary to accurately locate the finishing line. We think that at this time the industry may be racing in the wrong direction, possibly off a cliff and into a volcano, and that is not a race I wish to win. Nor do I wish for any others to win such a race to the bottom.

Chapter 4: How has Anthropic's revenue growth impacted its decision-making?

627.376 - 640.231 Ruby

Jack Clark, Anthropic co-founder and head of policy, elaborates on the plan. At a practical level, in many ways it doesn't matter what others do. We don't want to take actions we would regret; we don't want to pull a trigger aimed at ourselves.


641.313 - 663.164 Ruby

But at the same time, we are sending a clear signal to other labs, to the US government, world governments, foreign powers, and the public that the promise of AI is very great and so are the risks. I don't want the wake-up call to be an extreme disaster. I hope that saying, hey, we're going to risk our leading position over this and all that entails, is a wake-up call the world doesn't ignore.


664.205 - 682.982 Ruby

I hope we see treaties drawn up in response to this. I don't think we're handing the lead to China, I think we're creating the political conditions for an international agreement. The sooner everyone gets on board with truly responsible development, the sooner humanity can have the benefits. Not everyone believes it though.


683.519 - 700.243 Ruby

According to Scott Galloway, business professor at NYU and host of Prof G, the perplexing corporate move is attempted corporate strategy, regardless of whether it is good strategy. Let's be clear about what's happening. Anthropic has one of the most capable models in the world.


701.344 - 720.563 Ruby

They pause, they lobby for regulations that take years to navigate, and when the dust settles, they've locked in their advantage while everyone else is buried in compliance. It might be the most sophisticated regulatory capture play in history. Whether the attempt is earnest or a play, the bold move is upending the AI policy landscape.

721.184 - 742.534 Ruby

The last two years have seen significant AI legislative activity. Thousands of bills introduced across 45 states and hundreds enacted spanning deepfake bans, hiring disclosure, chatbot safety for minors, and transparency labels. No successful legislation has yet addressed the possibility that a frontier AI system might be too dangerous to build.

743.273 - 758.532 Ruby

The most ambitious attempt on this front, California's SB 1047, was vetoed by Governor Newsom after industry lobbying. Colorado's AI Act, the first comprehensive state law, has been delayed repeatedly and still isn't in effect.

758.512 - 774.563 Ruby

At the federal level, a Republican proposal attempted to ban states from regulating AI for 10 years, though this was killed 99-1 in the Senate after a bipartisan revolt led by GOP governors. On March 25, five days before the Anthropic pause announcement,

774.543 - 793.716 Ruby

Senator Bernie Sanders and Representative Alexandria Ocasio-Cortez introduced simultaneous bills in both chambers seeking an immediate federal moratorium on the construction of new AI data centres and the upgrading of existing ones, as well as export controls. This moratorium could only be lifted after comprehensive action by Congress.

Chapter 5: What unique governance structure does Anthropic employ for AI safety?

1011.917 - 1012.779 Ruby

There's an image here.


1014.141 - 1019.048 Dario Amodei

Group photo of world leaders at AI Safety Summit, November 2023.


1020.445 - 1035.002 Ruby

And perhaps of greatest significance, China's foreign ministry has issued a carefully worded statement expressing deep concern about the risks identified by Anthropic and calling for strengthened international cooperation on safe AI development under the framework of the United Nations.


1036.103 - 1046.655 Ruby

Skeptics might say that China would equally express this sentiment whether or not it intended to slow its own AI development, but it is consistent with China's posture at the UN debates last September.


1047.074 - 1067.287 Ruby

At the UN Security Council debate, the US was the sole dissenter against international coordination around AI, with OSTP Director Michael Kratsios explicitly rejecting centralized control and global governance of AI. China's sincerity is untested, but if the US reconsiders, it would appear that China is willing to come to the negotiating table.

1068.309 - 1090.833 Ruby

Back at home, one can assume the competition has been celebrating. OpenAI CEO Sam Altman posted on X, I commend Dario and Anthropic for acting in line with their conscience and best belief about what is best for humanity. We are committed to the same here at OpenAI. Fortunately, I have confidence in our people and approaches for creating AI beneficial for all humanity.

1091.855 - 1099.947 Ruby

If any Anthropic staff remain similarly hopeful, our doors are open, even for those who once left us. There's an image here.

1101.04 - 1127.605 Dario Amodei

Sam Altman tweets, I commend Dario and Anthropic for acting in line with their conscience and best belief about what is best for humanity. We are committed to the same here at OpenAI. Fortunately, I have confidence in our people and approaches for creating AI beneficial for all humanity. If any Anthropic staff remain similarly hopeful, our doors are open, even for those who once left us.

1129.137 - 1149.522 Ruby

A spokesperson for Google DeepMind said that while the company had not yet encountered anything to give them Anthropic's level of concern, they took the matter seriously and are in talks with Anthropic researchers to understand the risks informing the pause decision. Elon Musk, head of xAI, simply posted, LOL, you can trust Grok. There's an image here.

Chapter 6: What are the potential market impacts of Anthropic's pause announcement?

1401.945 - 1421.774 Ruby

When I first heard the news I was angry. We have the world's best researchers and Claude to help us. Surely we can solve whatever it is. But I think caution is right with technology this powerful. I will sleep well knowing we weren't irresponsible, we chose to do what's right, and if the fears are correct, well, you can't spend equity if you're dead.


1422.875 - 1446.227 Ruby

Another employee shared, I have elderly parents who are not well. I've been expecting Claude will grant them lasting health, and I fear any delays risk losing my parents forever. This isn't just about money. But I also have kids, and I think there are chances I'm not willing to take with their lives. This is hard, but I voted for it. Heading: Too dangerous to race.


1447.329 - 1471.602 Ruby

Till now, the AI race has been framed as inevitable and unavoidable. If we don't do it, someone else will. The side of good will not win by sitting back and letting reckless and immoral actors take the lead. Anthropic has decided to question that logic, and so far, it seems to be bearing fruit. Markets have reacted, politicians have mobilized, and the public is asking questions.


1472.257 - 1493.488 Ruby

It is too early to judge the ultimate effects of this move. Perhaps the race will continue with just one fewer player, but it seems unlikely that discourse on AI will ever forget that an industry leader was willing to risk everything they had in the name of safety. $200 billion is not a publicity stunt, it's one hell of an alarm. And the world is not sleeping through it.


1493.468 - 1505.149 Ruby

This article was narrated by TYPE III AUDIO for LessWrong. It was published on April 1, 2026. Images are included in the podcast episode description.
