…ML research all day than me, where I do, you know, weird thinking, and go on these kinds of podcasts, and write emails to people making grant decisions and stuff like that. It's much worse at that stuff. You can see already that it's got a very specialized skill profile.
Fortunately, I do think that, at least for AI safety, there's a big chunk of AI safety research that does look very similar to ML research.
And I do think, you know, my friends who are getting big speed-ups from AI are safety researchers, and they're doing the kinds of work, control, alignment, et cetera, that I think will be some of the most important things you want these AIs to be helping with at the very beginning.
But yeah, stuff like AI for epistemics, AI for moral philosophy, AI for negotiation, AI for policy design: all of that may just not be that good, doesn't necessarily have to be good by default, and that's a big concern with the plan.
Yeah, and in general, I think of AIs doing defensive labor as a prediction about the world that you want to be thinking about as you make your plans.
It's not a guarantee, and in many cases the answer will be to specialize now in doing the kinds of things that might be hardest for the AIs to do then.
And I think stuff like building a bunch of physical infrastructure, stockpiling PPE and vaccines and things like that, is a prime candidate for something that just inherently takes a long lead time, and that the AIs might not be that advantaged at by the point they're good at doing the scary things it's meant to protect against.
Yeah, I think that in general you should expect AIs to be much better at things that there are tighter feedback loops on, where you can recognize success after a short period of time.
And that's one of the reasons why they're really, really good at coding: you can just train them on this very hard-to-fake signal of whether the code actually ran after you did whatever you did with it.
And in general, I think idea generation versus actually executing on a one-year plan has some of this element: you can read a white paper and be like, oh yeah, that's pretty good. And you can push the thumbs-up button and generate an AI that's pretty good at generating white papers that you think are neat and probably would work. But it's much harder to train the AI to actually execute on that plan.
To some extent, that's right.
I think the reason that I focus so much on the intelligence explosion is twofold.
One is because at that point, I think we might have a pretty short clock to figure out a bunch of stuff.
And, you know, the default trajectory might look like 12 months to extremely powerful, uncontrollable superintelligence that can easily take over the world.
So it kind of changes the calculus: you might want to focus on very short-term things rather than things that have long lead times, at least at crunch time, if not before.
The other thing is, I think crunch time can help alleviate some of the challenges we've been talking about with AIs not being good at the full spectrum of things we want them to be good at.