Rob Wiblin
We need more.
Yeah.
So, I think one extremely important factor is, at that point in time, how good are AI systems at everything besides AI R&D?
So the alarm has sounded: we've learned that AI has fully or almost fully automated AI R&D at the leading AI lab, perhaps at all the AI labs.
This is causing those labs to go way faster than they were going with the mostly human-driven progress of the previous era.
So, at that point in time, whatever AI progress you thought was going to be made by default in the next 10 years or the next 20 years or the next 30 years might be made in a year or two or even six months, depending on how much AI is speeding everything up.
So at this stage, AIs might not be that dangerous, but we might be about to move very quickly from the point in time where they're not so dangerous to the point in time where they have godlike abilities.
And I think that what we want to do as a society, if we gain confidence that we're sort of at the starting point of this intelligence explosion,
is to redirect as much of that AI labor as we can from further AI R&D to things that could help protect us from future generations of AIs, both in terms of AI takeover risk and also in terms of a wide range of other problems that might be created for society by increasingly powerful AI.
And at that point, it's still not in the sort of narrow, selfish interests of whichever company is in the lead to do that, because if they were to slow down unilaterally, then someone behind them could catch up.
But hopefully, if the alarm has sounded and we have a clear picture that we have six months or 12 months or 18 months until radical superintelligence, then this might be a window of opportunity to coordinate: to use AIs for protective activities instead of further AI capability acceleration.
Yeah, and I think that a lot of people who are more concerned about AI risk are very dismissive of this plan.
It sort of sounds like a crazy plan.
It's really flying by the seat of your pants, expecting the thing that's creating the problem to solve the problem.
But in a sense, I do think humanity has repeatedly used general-purpose technologies that created problems to also solve those problems.
Like automobiles, something as mundane as that.