
EA Forum Podcast (All audio)

“Anthropic’s leading researchers acted as moderate accelerationists” by Remmelt

02 Sep 2025

Description

In 2021, a circle of researchers left OpenAI after a bitter dispute with their executives. They started a competing company, Anthropic, stating that they wanted to put safety first. The safety community responded with broad support: thought leaders recommended that engineers apply, and allied billionaires invested.[1]

Since then, Anthropic's focus has shifted – from internal-only research and cautious demos of model safety and capabilities, toward commercialising models for Amazon and the military. Despite the shift, 80,000 Hours continues to recommend that talented engineers join Anthropic. On the LessWrong forum, many authors continue to support safety work at Anthropic, but I also see side-conversations where people raise concerns about premature model releases and policy overreaches. So there are many seemingly conflicting opinions about the work of different Anthropic staff, and no overview. But the bigger problem is that we are not evaluating Anthropic on its original justification for existence. Did early researchers put [...]

Outline:
(04:47) 1. Scaled GPT before founding Anthropic
(19:02) Rationale #1: 'AI progress is inevitable'
(24:09) Rationale #2: 'we scale first so we can make it safe'
(30:01) Rationale #3: 'we reduce the hardware overhang now to prevent disruption later'
(32:39) 2. Founded an AGI development company and started competing on capabilities
(39:12) Early commitments
(41:09) Degrading commitments
(44:52) Declining safety governance
(46:27) 3. Lobbied for policies that minimised Anthropic's accountability for safety
(47:14) Minimal 'Responsible Scaling Policies'
(59:20) Lobbied against provisions in SB 1047
(01:02:25) 4. Built ties with AI weapons contractors and the US military
(01:03:50) Anthropic's intel-defence partnership
(01:06:20) Anthropic's earlier ties
(01:07:40) 5. Promoted band-aid fixes to speculative risks over existing dangers that are costly to address
(01:11:50) Cheap fixes for risks that are still speculative
(01:14:02) Example of an existing problem that is costly to address
(01:17:20) Conclusion

First published: September 1st, 2025
Source: https://forum.effectivealtruism.org/posts/izGaTX3E7tdTa29a5/anthropic-s-leading-researchers-acted-as-moderate

Narrated by TYPE III AUDIO.


