
Ajeya Cotra

👤 Speaker
626 total appearances

Appearances Over Time

Podcast Appearances

80,000 Hours Podcast
Every AI Company's Safety Plan is 'Use AI to Make AI Safe'. Is That Crazy? | Ajeya Cotra

If this doesn't work out, why do you think it's most likely to have failed for them?

unless they have really strong commitments.

But I guess other mechanisms would be that it's legally required.

At this point, the government basically insists that most of the compute go towards this, or at least that most of it's not going towards recursive self-improvement.

Or I guess if the companies could reach some sort of agreement where they're saying, well, we would all like to spend more of our compute on this kind of thing.

So we're gonna have some, I guess, contract where we're gonna spend like 50%

of all of our compute, and then we don't lose relative position in particular.

It might be a little illegal, but yeah, maybe we could carve out an exception to antitrust with this one.

I guess a different mechanism, in as much as the government is taking a massive interest, they could help to try to coordinate this one way or another.

I thought that you might say that the most likely reason for this to fail was that it just turned out that alignment is incredibly hard.

You get egregious misalignment even at relatively low levels of intelligence, and we don't really figure out how to fix that early enough to get useful work out of them.

I guess another way that they could end up actually just not making that much of an effort is if the window is relatively brief and it just takes a long time to get projects off the ground.

And they haven't really planned this out ahead of time.

So, you know, they end up debating it back and forth.

And then by the time they've figured out that they actually do want to do this, I mean, I suppose it's like nominally in these various papers, but I wonder whether they actually are thinking ahead about how this would feel and whether they'll have the decision-making capability to decide to redirect enormous resources towards this other effort.

Okay, so that's the AI companies who I guess we're imagining would mostly be focused on this strategy for AI technical alignment.

But you've been thinking about this more in the context of Open Philanthropy and what niche it could fill.

What would Open Philanthropy need to do if this, dumping billions of dollars onto this plan, became its mainline strategy?

If I think about this kind of psychologically, I could imagine, you know, if I was leading Open Philanthropy, or I guess if I was one of the donors being advised, and we did have these transparency requirements, and we did start getting a sense that an intelligence explosion might be kicking off.

I could imagine dithering for a long time, rather than deciding to commit billions of dollars towards this, because there's only so much money, there's only so large an endowment.