Rob Wiblin
Because sort of by definition, at that point, AIs are really good at further AI R&D.
And one of the things we could do with AIs that are good at AI R&D, at least in most cases, is to try and direct their AI R&D towards filling out the skill profile of AIs: getting them to be good at the types of things we want them to be good at that they aren't so good at right now.
And so at that point, you might have just much more capability at your disposal.
And it might be much more worth putting in the effort to try and fine-tune and scaffold and do all these other things to make your AI that's good at moral philosophy, or your AI that's good at biodefense.
Yeah, I mean, just like how right now, 80% plus of our grant money goes to salaries to pay humans to think about stuff and do research and do policy analysis and advocacy and all these other things.
So too, in a few years, it might be the case that AIs are better than most of our human grantees, and our money should mostly be going to buying API credits or renting GPU time to get the AIs to do a similar distribution of activities.
Yeah, so I think that the plan I described is compatible with pausing right at the brink of an intelligence explosion.
In fact, I would hope that we do that because I think by default, having 12 months to get everything in order is just not enough time.
But I think of it as doing two things.
One is making the pause less binary.
So if you think of the default path as almost 100% of AI labor going into further rounds of making AIs better and making more AIs and making more chips and so on, and you think of a pause or a stop as 0% of the world's AI labor going towards those activities, then I think there's a whole spectrum between 0% and 100%.
And then I think of it as doing another thing, which is answering the question of what you do in the pause: you do all this protective stuff, and you have these AIs around to do it with.
And once you have that frame of making the pause less binary and thinking really hard about what you do during a pause, I think you might often end up thinking: oh, it's worth going a little bit further with AI capabilities, because, especially if we tilt the capabilities in a certain direction, we might at the end of that get AIs that are much better than they are right now at biodefense, while still not being uncontrollable, still not being that scary.
And you can imagine a bunch of little pauses and little redirections and so on during that whole period.
And I would hope that at some point in that period, we do activities like policy coordination and so on that give us longer in this sweet spot of AIs that are powerful enough to help with a lot of stuff, but not so powerful that they're scary.