
Rob Wiblin

Speaker
1910 total appearances


Podcast Appearances

80,000 Hours Podcast
Every AI Company's Safety Plan is 'Use AI to Make AI Safe'. Is That Crazy? | Ajeya Cotra

And meanwhile, you know, the sort of accelerationists think that by default, diffusing and capturing the benefits of AI will take like 50 years or 100 years, and they want to speed it up to take 35 years, you know.

I think that probably in the early 2030s, we are going to see what Ryan Greenblatt calls top human expert dominating AI, which is an AI system that can do tasks that you can do remotely from a computer better than any human expert.

So it's better at remote virology tasks than the best virologists, better at remote software engineering tasks than the best software engineers, and so on for all the different domains.

And by that time, I feel like probably the world has already accelerated and changed and sort of narrower and weaker AI systems have already penetrated in a bunch of places and we're looking at a pretty different world.

But at that point, I think things can go much, much faster, because I think top human expert dominating AIs in the cognitive domain could probably use human physical labor to build robotic physical actuators for themselves.

That would be one of the things where, whether the AIs have already taken over and are acting on their own, or whether humans are still in control of the AIs, I think automating the physical world as well would be a goal they would have.

And I think I have pretty wide uncertainty on like exactly how hard that'll be.

But whenever I check in on the field of robotics, I actually feel like robotics is progressing pretty quickly.

And it's taking off for the same reasons that sort of cognitive AI is taking off.

It's large models, lots of data, imitation at large scale; that's helping robotics a lot.

So I imagine that you can pretty quickly, maybe within a year, maybe within a couple years, get to the point where these superhuman AIs are controlling a bunch of physical actuators that allow them to close the loop of making more of themselves, doing all the work required to run the factories that print out the chips that then run the AIs, and doing all the repair work on that, and gathering the raw materials on that.

So I really recommend the post, Three Types of Intelligence Explosion, by Tom Davidson on Forethought, where he makes the point that we talk a lot about the sort of promise and the danger of AIs automating AI R&D, automating the process of making better AIs.

But that's only one feedback loop that is required to fully close the loop of making more AIs, because we're talking about software that makes the transformer architecture slightly more efficient or gathers better data to train the AIs on.

But the AIs are also running on chips, which are printed in these chip factories at NVIDIA.

And those factories have machines that are built by other machines that are built by other machines and ultimately go down to raw materials.

And I think that something we don't talk about very much, because it'll happen afterward, is how hard it would be for the AIs to automate that entire stack, the full stack, and not just the software stack.

Yeah, I feel like at the end of the day, the different parties tend to lean on two pretty simple priors, two kind of different outside views.

And I would say that the party, the sort of group that expects things to be a lot slower, tends to lean on, well, for the last 100, 150 years in frontier economies, we've seen 2% growth.
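As a rough arithmetic sketch of what that 2% baseline implies (my own illustration, not a figure from the episode), a steady 2% annual growth rate works out to an economy that doubles roughly every 35 years:

```python
import math

# At a constant annual growth rate g, output doubles when (1 + g)^t = 2,
# so the doubling time is t = ln(2) / ln(1 + g).
growth_rate = 0.02  # the ~2% historical growth rate mentioned above
doubling_time = math.log(2) / math.log(1 + growth_rate)
print(f"Doubling time at 2% annual growth: {doubling_time:.1f} years")  # ~35 years
```

So the "slow" outside view amounts to expecting the economy to keep doubling on a multi-decade timescale, which is the baseline the faster-takeoff views are arguing against.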