Rob Wiblin
They're like...
you know, we've already lost the game.
Yeah, so I think that if a really clear early warning sign triggers that we are about to enter into this intelligence explosion, fast takeoff space, where we go in the space of 12 months from AI R&D automation to vastly superhuman AI, then I would vote, at that time,
for shifting that trajectory to be 10 times longer or even longer than that, and trying to make that transition as a society in 10 years instead of one year, or 20 years instead of one year. I still wouldn't, and this is maybe a bit of a quibble, I still wouldn't advocate for pausing, then hanging out for 10 years and then unpausing, because I actually think that slowly inching our way up is better than pausing, then unpausing, and then having a jump.
But yeah, to go back to what we said about how your default expectations of trajectories influence what you think should happen:
I think the default is going through this in like one year, and I would certainly rather it be 10 or 15 or 20 years.
But I think that the frame of using AIs to solve our problems applies regardless of whether you're sort of white-knuckling it in one year or maybe eking out an extra two months, or if you manage to get the consensus and the common knowledge that allows the world to step through it in 10 years.
I think that if it fails, it's most likely to fail because they just didn't actually do a big redirection from using AIs for further AI capabilities to putting a lot of energy towards using them for AI safety.
Because they say this is their plan, but they don't really have any quantitative claims about what fraction of their AI labor, or their human labor for that matter, is going to go towards safety versus further acceleration at that stage.
And they'll be facing tremendous pressure at that point from their competitors to stay ahead.
My guess is that unless they have much more robust commitments than they have right now, they probably just won't be directing that much of their AI labor.
If they have 100,000 really smart human equivalents, maybe only 100 of them are working on AI safety, which is maybe still more than they had before in human labor, but not that much compared to how quickly things are going.
Yeah, I mean, I think that particular contract is probably going to run into big antitrust issues.
Yeah, I think that's a possibility.
I do think it's a bit tough.
This is not the kind of thing it's super easy to make laws about, because it's really not a box-checking exercise. When you write legislation saying that half the compute must be spent on safety rather than capabilities, what do you count as safety research? And how are you enforcing it?