
Rob Wiblin

Speaker
2780 total appearances

Podcast Appearances

80,000 Hours Podcast
Every AI Company's Safety Plan is 'Use AI to Make AI Safe'. Is That Crazy? | Ajeya Cotra

Because sort of by definition, at that point, AIs are really good at further AI R&D.

And one of the things we could do with AIs that are good at AI R&D, at least in most cases, is to try and direct their AI R&D towards filling out the skill profile of AIs and getting them to be good at some of the types of things we want them to be good at that they aren't so good at right now.

And so at that point, you might have like just much more capability at your disposal.

And it might be much more worth putting in the effort to fine-tune and scaffold and do all these other things to make your AI that's good at moral philosophy, or your AI that's good at biodefense.

Yeah, I mean, just like how right now 80%-plus of our grant money goes to salaries to pay humans to think about stuff and do research and do policy analysis and advocacy and all these other things. So too, in a few years, it might be the case that AIs are better than most of our human grantees, and our money should mostly be going to buying API credits or renting GPU time to get the AIs to do a similar distribution of activities.

Yeah, so I think the plan I described is compatible with pausing at an intelligence explosion, right at the brink of one. In fact, I would hope that we do that, because I think by default, having 12 months to get everything in order is just not enough time.

But I think of it as doing two things. One is making the pause less binary.

So if you think of the default path as one where almost 100% of AI labor goes into further rounds of making AIs better and making more AIs and making more chips and so on, and you think of a pause or a stop as one where 0% of the world's AI labor goes towards those activities, then I think there's a whole spectrum between zero and 100%.

And then I think of it as doing another thing, which is answering the question of what you do in the pause: you do all this protective stuff, and you have these AIs around to do it with.

And once you have that frame of making the pause less binary and thinking really hard about what you do during a pause, I think you might often end up thinking: oh, it's worth going a little bit further with AI capabilities, because, especially if we tilt the capabilities in a certain direction, we might at the end of that get AIs that are much better than they are right now at biodefense, while still not being uncontrollable, still not being that scary.

And you can imagine a bunch of little pauses and little redirections and so on during that whole period.

And I would hope that at some point in the period, we do activities like policy coordination and so on that cause us to have longer in this sweet spot of AIs that are powerful enough to help with a lot of stuff, but not so powerful.