
Ajeya Cotra

👤 Speaker
626 total appearances

Podcast Appearances

80,000 Hours Podcast
Every AI Company's Safety Plan is 'Use AI to Make AI Safe'. Is That Crazy? | Ajeya Cotra

I guess another worry would be that the AI models end up being able to cause trouble before they end up being capable enough to figure out solutions.

A classic case there would be: imagine that we put a lot of effort (I guess it would be a bit stupid to do this) into training an AI model that's extremely good at developing new viruses or new bacteria, basically modifying diseases to make them worse.

I mean, there are people who are using AI to develop new viruses.

I guess they're using it to develop vaccines.

Medical treatments too, but that sort of work can then be repurposed for other things.

But if that sort of highly specialized model arrives first, before you end up with a model that has a sufficient understanding of society, biology, and medicine to figure out what the good countermeasures are, then we'll need a different approach than this one.

Yeah, that was going to be another concern of mine: inasmuch as the AIs are very helpful, you might imagine that they're very helpful at the idea generation or strategizing stage, but still quite bad at actually running a business or actually figuring out how to do all of the manufacturing.

So if they could come up with a great strategy for countering new bioweapons, where they're like, "Here's the widget that you should use."

Go and make 10 billion of them.

They're like, can you help us with that?

It's like, no, I'm not very good at that.

Good luck.

Run the team of, like, thousands of humans and robots that are actually executing on the plan.

Why is the crunch time aspect, or, you know, the intelligence explosion taking off, actually even relevant to when we would want to start doing this? Because you might just think: if AI can help us do research or do work to solve any of these problems, then as soon as it's able to do that, we want to do it, whether or not an intelligence explosion is kicking off.

So you're thinking about this strategy not just as a description of what other organizations potentially should work on, or of what the AI companies are already planning to do, but also, I guess, because you think maybe this should influence what Open Philanthropy plans to do over the next couple of years, and potentially that Open Philanthropy's best play might be to have billions of dollars waiting at this relevant crunch time, and then disburse them incredibly quickly, buying a whole lot of compute to get AIs to solve these problems.

So an alternative approach to this would be that at the point that we get a heads-up that an intelligence explosion is beginning to take place, we do everything we can to pause at that stage, to slow down, basically to arrest that process, so that rather than having to rush, in three or six months, to get the AIs to fix all of these issues, we buy ourselves a bunch more time.

Why not adopt that as the primary approach instead?

So yeah, we should probably clarify that, although you think this is among our best bets, in an ideal world, do you think that we would go substantially slower through all of this?

Because, you know, as good a plan as this might be, we'll really be white knuckling it and not be confident that it's necessarily going to work.

Yeah, because in as much as we're slowing down to do something, this is a big part of the thing that we're slowing down to do.

So this is a big part of the company's plan for technical alignment.