Rob Wiblin
Maybe the only thing they're good at is making it so that future generations of AIs have better sample efficiency and can learn new things more efficiently.
Then you could have a period of six months or a year where you know this is happening and you have these AIs, but you're still sort of hurtling towards a highly general superintelligence without being able to use these AIs for anything else necessarily because they're just not good at anything else.
Yeah.
And then at the last stage, it might go back to the first scenario I talked about, where the narrow AIs that are just savants at AI R&D hit upon an algorithm in almost a blind search. Like if you imagine AlphaFold: it's brilliant at figuring out how proteins fold, but isn't
Yeah, it isn't like broadly aware.
Like you could imagine such AIs or like an algorithmic search process hitting upon an architecture or like a training strategy that then can go foom really quickly.
And so in this lead up, you're like, yep, AI is accelerating AI R&D.
It's crunch time.
We have six months left.
We have three months left.
But like these AIs are not the AIs that you can use for anything useful.
Yeah, I think that the further afield you go from work that looks like doing ML research and doing software engineering, the greater the penalty will probably be.
The AIs currently are much better at helping my friends who do ML research all day than me, where I do, you know, weird thinking, go on these kinds of podcasts, write emails to people, make grant decisions, and stuff like that. It's much worse at that stuff. You can see already that it's got a very specialised skill profile.
Fortunately, I do think that, at least for AI safety, there's a big chunk of research that does look very similar to ML research.
And I do think, you know, my friends who are getting big speedups from AI are safety researchers, and they're doing the kinds of work, control, alignment, et cetera, that I think will be some of the most important things you want these AIs to be helping with at the very beginning.
But yeah, stuff like AI for epistemics, AI for moral philosophy, AI for negotiation, AI for policy design, all that stuff may not be that good by default, and that's a big concern with the plan.
Yeah, and in general, I think of AIs doing defensive labor as a prediction about the world that you want to try and be thinking about as you make your plans.
It's not a guarantee, and in many cases the answer will be to specialise now in doing the kinds of things that might be hardest for the AIs to do then.