
Rob Wiblin

👤 Speaker
1910 total appearances


Podcast Appearances

80,000 Hours Podcast
Every AI Company's Safety Plan is 'Use AI to Make AI Safe'. Is That Crazy? | Ajeya Cotra

That whole way of thinking doesn't feel very legitimate or interesting to them, or they sort of have a story where that type of thinking always leads to a bias towards expecting things to go faster than they actually will, because it's hard for that kind of thinking to account for all the drag factors and all the bottlenecks.

Whereas I think on the other side, people who think things will go faster feel like everyone is always kind of blanket-assuming that there are going to be bottlenecks. And then they bring up specific bottlenecks. And those specific bottlenecks, when you look into them, don't seem decisive: they might slow things down from some sort of absolute peak of a thousand percent growth, but they're not reasons to think that two percent is where the ceiling is, or even that ten percent is where the ceiling is. So they also have this kind of error theory of the bottleneck objection.

One thing that I think will not address all of this, but is a step in the right direction, is really characterizing whether, how, and why AI is speeding up software and AI R&D.

80,000 Hours Podcast
Every AI Company's Safety Plan is 'Use AI to Make AI Safe'. Is That Crazy? | Ajeya Cotra

METR came out with an uplift RCT, which I think was the first of its kind, or at least the largest and highest quality, where they had software developers split into two groups. One group was allowed to use AI; the other group was disallowed from using AI. And they studied how quickly those developers solved issues, like tasks on their to-do list. And it actually turned out that in this case, AI slowed down their performance, which I thought was interesting. I don't expect that to remain true.
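The basic shape of an uplift comparison like this can be sketched in a few lines. The completion times below are invented for illustration, and this is only a minimal two-arm mean comparison, not METR's actual methodology:

```python
# Illustrative sketch of a two-arm "uplift" comparison: estimate the
# AI speedup (or slowdown) from mean task-completion times per group.
# All numbers are made-up example data, in minutes per task.
from statistics import mean

ai_allowed    = [42, 55, 38, 60, 47, 52]   # hypothetical times with AI
ai_disallowed = [40, 44, 39, 50, 43, 46]   # hypothetical times without AI

diff = mean(ai_allowed) - mean(ai_disallowed)
# Positive uplift = AI group finishes faster; negative = AI slows them down.
uplift_pct = 100 * (mean(ai_disallowed) / mean(ai_allowed) - 1)

print(f"Mean difference: {diff:+.1f} min/task")
print(f"Speedup from AI: {uplift_pct:+.1f}%")
```

In this fabricated data the AI-allowed group is slower on average, mirroring the direction of the result described above; a real analysis would also need randomized task assignment and a significance test, not just a difference in means.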

But I'm glad we're starting to collect this data now, and I'm glad we're starting to cross-check between benchmark-style evaluations, where AIs are given a bunch of tasks and scored in an automated way, and evidence we can get about actual, in-context, real-world speedups.

So I really want to get a lot more evidence about that of all kinds, like big uplift RCTs.

It would be great if companies were internally conducting RCTs on their own rollouts of internal products, to see: are teams that get the latest AI product earlier more productive than teams that don't?

Even self-report, which I think has a lot of limitations, is still something we should be gathering.