
Dwarkesh Patel

👤 Speaker
14445 total appearances


Podcast Appearances

Dwarkesh Podcast
I’m glad the Anthropic fight is happening now

The underlying logic for why Anthropic wants these regulations makes sense.

Many of the actions that a lab could take to make AI development safer impose real costs on it and could slow it down relative to its competitors.

For example: investing more in aligning AI systems rather than just raw capabilities; enforcing safeguards against using these models to make bioweapons or carry out cyberattacks; and eventually slowing the recursive self-improvement loop, in which AIs help design more powerful future systems, to a pace where humans can actually stay in the loop rather than kicking off some kind of uncontrolled singularity.

And these safeguards are meaningless unless the whole industry follows suit, which means that there's a real collective action problem here.

Anthropic has been open about its view that some sort of extensive and involved regulatory apparatus is needed to control AI.

They wrote in their Frontier Safety Roadmap: "At the most advanced capability levels and risks, the appropriate governance analogy may be closer to nuclear energy or financial regulation than to today's approach to software."

So they're imagining something that looks closer to the Nuclear Regulatory Commission or the Securities and Exchange Commission, but for AI.

Now, I cannot imagine how a regulatory framework built around the kinds of concepts that are used in the AI risk discourse will not be used and abused by a wannabe despot.

The underlying terms here, like "catastrophic risk," "threats to national security," or "autonomy risk," are so vague and so open to interpretation that you're just handing a fully loaded bazooka to a future power-hungry leader.

These terms can mean whatever the government wants them to mean.

Have you built a model that will tell users that the government's policy on tariffs is misguided? Well, that's a deceptive model. It's a manipulative model. You can't deploy it.

Have you built a model that will not assist the government with mass surveillance? That's a threat to national security.