
Rob Wiblin

Speaker
1910 total appearances

Appearances Over Time

Podcast Appearances

80,000 Hours Podcast
Every AI Company's Safety Plan is 'Use AI to Make AI Safe'. Is That Crazy? | Ajeya Cotra

No one really wants the sort of destruction that comes from everybody racing as hard as possible to get there first.

But there's a complicated space of negotiated options beyond that.

And I think AIs could potentially help a lot with that sort of thing.

Yeah, I would think so.

I think if you look at public communications from at least OpenAI, Anthropic, and Google DeepMind, this more or less jumps out in each of these cases. But in all of their stated safety plans, you see this element of: as AIs get better and better, they're going to incorporate the AIs themselves into their safety plans more and more.

And I think some are more explicit than others about expecting some sort of specific crunch time that occurs when AI is rapidly accelerating AI R&D.

But everybody is picturing AIs playing a heavy role in the safety of future AIs.

Yeah, I think fundamentally, you need it to be the case that there exists a window of opportunity, before AIs are uncontrollably powerful or have created unacceptable levels of risk, where they are really capable and really change the game for AI safety research; that there's some meaningful window of time where you can notice as you're approaching it; and that even by default, without a crazy slowdown, it lasts at least six months or a year.

If you think instead that once your AI hits some generality threshold it becomes crazily superintelligent within a matter of days or weeks, this plan doesn't work, because you probably wouldn't even notice before it's too late.

And then there can also be unlucky orderings of capabilities where this plan wouldn't work: you could have AIs that are really specifically good at AI R&D and really not good at anything else, not even at AI safety research that's very similar to AI R&D. They're just extremely good at AI R&D.

Maybe the only thing they're good at is making it so that future generations of AIs have better sample efficiency and can learn new things more efficiently.

Then you could have a period of six months or a year where you know this is happening and you have these AIs, but you're still hurtling towards a highly general superintelligence, without being able to use these AIs for anything else because they're just not good at anything else.

Yeah.