Rob Wiblin
And in general, I think idea generation versus actually executing on a one-year plan has some of this element: you can read a white paper and be like, oh yeah, that's pretty good. And you can push the thumbs-up button and generate an AI that's pretty good at producing white papers that you think are neat and probably would work. But it's much harder to train the AI to...
To some extent, that's right.
I think the reason that I focus so much on the intelligence explosion is twofold.
One is because at that point, I think we might have a pretty short clock to figure out a bunch of stuff.
And, you know, the default trajectory might look like 12 months to extremely powerful, uncontrollable superintelligence that can easily take over the world.
So it kind of changes our calculus: you might want to focus on very short-term things rather than things that have long lead times, at least at crunch time, if not before.
The other thing is, I think crunch time can help alleviate some of the challenges we've been talking about with AIs not being good at the full spectrum of things we want them to be good at.
Because sort of by definition, at that point, AIs are really good at further AI R&D.
And one of the things we could do with AIs that are good at AI R&D, at least in most cases, is to try and direct that AI R&D towards filling out the skill profile of AIs: getting them to be good at some of the types of things we want them to be good at that they aren't so good at right now.
And so at that point, you might have just much more capability at your disposal. And it might be much more worth putting in the effort to try and fine-tune and scaffold and do all these other things to make your AI that's good at moral philosophy, or your AI that's good at biodefense.
Yeah, I mean, just like how right now, you know, 80% plus of our grant money goes to salaries to pay humans to think about stuff and do research and do policy analysis and advocacy and all these other things.
You know, so too, in a few years, it might be the case that AIs are better than most of our human grantees and our money should mostly be going to buying API credits or renting GPU time to get the AIs to do like a similar distribution of activities.
Yeah, so I think that the plan I described is compatible with pausing right at the brink of an intelligence explosion.
In fact, I would hope that we do that because I think by default, having 12 months to get everything in order is just not enough time.
But I think of it as...