Rob Wiblin
And meanwhile, the accelerationists think that by default, diffusing and capturing the benefits of AI will take like 50 or 100 years, and they want to speed it up to take 35 years.
I think that probably in the early 2030s, we are going to see what Ryan Greenblatt calls top-human-expert-dominating AI, which is an AI system that can do any task that can be done remotely from a computer better than any human expert.
So it's better at remote virology tasks than the best virologists, better at remote software engineering tasks than the best software engineers, and so on for all the different domains.
And by that time, I feel like probably the world has already accelerated and changed, and narrower and weaker AI systems have already penetrated a bunch of places, and we're looking at a pretty different world.
But at that point, I think things can go much, much faster, because I think top-human-expert-dominating AIs in the cognitive domain could probably use human physical labor to build robotic physical actuators for themselves.
Whether the AIs have already taken over and are acting on their own, or whether humans are still in control of them, I think automating the physical world as well would be one of their goals.
And I have pretty wide uncertainty on exactly how hard that'll be. But whenever I check in on the field of robotics, I actually feel like robotics is progressing pretty quickly.
And it's taking off for the same reasons that cognitive AI is taking off: large models, lots of data, imitation learning. Large scale is helping robotics a lot.
So I imagine that you can pretty quickly, maybe within a year, maybe within a couple of years, get to the point where these superhuman AIs are controlling a bunch of physical actuators that allow them to close the loop of making more of themselves: doing all the work required to run the factories that print out the chips that then run the AIs, doing all the repair work on that infrastructure, and gathering the raw materials for it.
So I really recommend the post "Three Types of Intelligence Explosion" by Tom Davidson on Forethought, where he makes the point that we talk a lot about the promise and the danger of AIs automating AI R&D, that is, automating the process of making better AIs.
But that's only one of the feedback loops required to fully close the loop of making more AIs, because there we're only talking about software that makes the transformer architecture slightly more efficient or gathers better data to train the AIs on.
But the AIs are also running on chips, which are printed in the chip factories that fabricate them for companies like NVIDIA.
And those factories have machines that are built by other machines that are built by other machines and ultimately go down to raw materials.
And I think that something we don't talk about very much, because it'll happen afterward, is how hard it would be for the AIs to automate that entire stack, the full stack, and not just the software stack.
Yeah, I feel like at the end of the day, the different parties tend to lean on two pretty simple priors, two different outside views.
And I would say that the group that expects things to be a lot slower tends to lean on the fact that for the last 100 to 150 years in frontier economies, we've seen 2% growth.