
Rob Wiblin

👤 Speaker
1910 total appearances

Podcast Appearances

80,000 Hours Podcast
Every AI Company's Safety Plan is 'Use AI to Make AI Safe'. Is That Crazy? | Ajeya Cotra

doing two things.

One is making the pause less binary.

So if you think of the default path as almost 100% of AI labor going into further rounds of making AIs better, making more AIs, making more chips, and so on.

And you think of a pause or a stop as 0% of the world's AI labor going towards those activities.

I think there's a whole spectrum between zero and 100%.

And then I think of it as doing another thing, which is answering the question of what you do in the pause: you do all this protective stuff, and you have these AIs around to do it with.

And you might think, once you have that frame of making the pause less binary and thinking really hard about what you do during a pause, I think you might often end up thinking: oh, it's worth going a little bit further with AI capabilities, because especially if we tilt the capabilities in a certain direction, we might at the end of that get AIs that are much better than they are right now at biodefense, while still not being uncontrollable, still not being that scary.

And you can imagine a bunch of little pauses and little redirections and so on during that whole period.

And I would hope that at some point in that period, we do activities like policy coordination and so on that cause us to have longer in this sweet spot: AIs that are powerful enough to help with a lot of stuff, but not so powerful that, you know, we've already lost the game.

Yeah, so I think that if a really clear early warning sign triggers that we are about to enter this intelligence explosion, fast-takeoff regime, where we go in the space of 12 months from AI R&D automation to vastly superhuman AI, then I would vote, at that time, for shifting that trajectory to be 10 times longer or even longer than that: trying to make that transition as a society in 10 or 20 years instead of one. I still wouldn't, and this is maybe a bit of a quibble, I still wouldn't advocate for pausing, hanging out for 10 years, and then unpausing, because I actually think that slowly inching our way up is better than pause, then unpause, then a jump.

But yeah, I'd like to go back to what we said about how your default expectations of trajectories influence what you think should happen.

I think the default is going through this in like one year, and I would certainly rather it be 10 or 15 or 20 years.

But I think that the frame of using AIs to solve our problems applies regardless of whether you're sort of white-knuckling it in one year or maybe eking out an extra two months, or if you manage to get the consensus and the common knowledge that allows the world to step through it in 10 years.

I think that if it fails, it's most likely to fail because they just didn't actually do a big redirection from using AIs for further AI capabilities to putting a lot of energy towards using them for AI safety.

Because they say this is their plan, but they don't really have any quantitative claims about, at that stage, what fraction of their AI labor (or their human labor, for that matter) is going to go towards safety versus further acceleration.