
Ajeya Cotra

Speaker
626 total appearances


Podcast Appearances

80,000 Hours Podcast
Every AI Company's Safety Plan is 'Use AI to Make AI Safe'. Is That Crazy? | Ajeya Cotra

that doesn't want to help you.

But it's a lot easier to see how you potentially solve problems other than alignment.

Like if you assume, well, the alignment part, we feel like we've got a good handle on.

But there's a huge list of other problems that are being created during the intelligence explosion, like the fact that AI now, if people get access to it, could invent other kinds of destructive technologies that we don't yet have good countermeasures for.

In that case, it's just clear how the AI could just help you figure out what the countermeasures ought to be.

That makes sense.

I think the distinction I was drawing is that for people who thought the alignment problem was extremely hard to solve and that we were way off track to solving it, the idea of getting the AI to solve the problem is kind of self-contradictory, because, well, I wouldn't trust the AI at all.

Anything that it proposed, I would assume was sabotaging us.

If you're on the side of thinking, well, the alignment problem is actually the easier part of things, that it's a relatively straightforward technical problem that we are on track to solve.

But there's this laundry list of 10 other issues.

It's then very obvious: well, we'll have the brilliant AGI, so why don't we just use that to solve all the other things?

And also, I'm inclined to trust it and believe it.

So which kinds of specific problems arising from the intelligence explosion are you envisaging wanting to get the AGI to help us out with?

How do you ensure that advances in AI don't lead to a war between the US and China, that kind of thing?

So I interviewed Will MacAskill and Tom Davidson from Forethought earlier in the year.

And the organization has a long list of what they call grand challenges, all of which they suspect are probably amenable to this kind of AGI labor during crunch time.

I think other ones are like ensuring that society doesn't end up locked into particular values prematurely, in a way that cuts off our ability for further reflection and changing our minds.

The potential for AI or AGI, insofar as it's very steerable and follows instructions, to be used in power grabs by the people who are operating it.

I guess space governance: this question of, if we actually do start to be able to use resources in space, how would we share them?

How would we divide them, in particular, such that there's not conflict ahead of time, because people anticipate that once you start grabbing resources in space, you're on track to become overwhelmingly dominant?