Rob Wiblin
doing two things.
One is making the pause less binary.
So if you think of the default path as almost 100% of AI labor going into further rounds of making AIs better, making more AIs, making more chips, and so on.
And you think of a pause or a stop as 0% of the world's AI labor going towards those activities.
I think there's a whole spectrum between zero and 100%.
And then I think of it as doing another thing, which is answering the question of what you do during the pause: you do all this protective stuff, and you have these AIs around to do it with.
And once you have that frame of making the pause less binary and thinking really hard about what you do during a pause, I think you might often end up thinking: oh, it's worth going a little bit further with AI capabilities, because especially if we tilt the capabilities in a certain direction, we might at the end of that get AIs that are much better than they are right now at biodefense, while still not being uncontrollable, still not being that scary.
And you can imagine a bunch of little pauses and little redirections and so on during that whole period.
And I would hope that at some point in the period, we do activities like policy coordination and so on that give us longer in this sweet spot: AIs that are powerful enough to help with a lot of stuff, but not so powerful that it's like, you know, we've already lost the game.
Yeah, so I think that if a really clear early warning sign triggers that we are about to enter this intelligence explosion, fast-takeoff regime, where we go in the space of 12 months from AI R&D automation to vastly superhuman AI, then I would vote, at that time, for shifting that trajectory to be 10 times longer or even more: trying to make that transition as a society in 10 years instead of one year, or 20 years instead of one year. But, and this is maybe a bit of a quibble, I still wouldn't advocate for pausing, hanging out for 10 years, and then unpausing, because I actually think that slowly inching our way up is better than pausing, then unpausing, and then having a jump.
But yeah, going back to what we said about how your default expectations of trajectories influence what you think should happen:
I think the default is going through this in like one year, and I would certainly rather it be 10 or 15 or 20 years.
But I think that the frame of using AIs to solve our problems applies regardless of whether you're sort of white-knuckling it in one year, maybe eking out an extra two months, or whether you manage to get the consensus and the common knowledge that allows the world to step through it in 10 years.
I think that if it fails, it's most likely to fail because they just didn't actually do a big redirection from using AIs for further AI capabilities to putting a lot of energy towards using them for AI safety.
They say this is their plan, but they don't really have any quantitative claims about, at that stage, what fraction of their AI labor, or their human labor for that matter, is going to go towards safety versus further acceleration.