Rob Wiblin
And then at the last stage, it might go back to the first scenario I talked about, where it's like: oh, the narrow AIs that are just savants at AI R&D hit upon an algorithm in almost a blind search. It's almost like AlphaFold: it's brilliant at figuring out how proteins fold, but isn't...
Yeah, it isn't like broadly aware.
Like you could imagine such AIs or like an algorithmic search process hitting upon an architecture or like a training strategy that then can go foom really quickly.
And so in this lead up, you're like, yep, AI is accelerating AI R&D.
It's crunch time.
We have six months left.
We have three months left.
But these AIs are not AIs that you can use for anything else useful.
Yeah, I think that the further afield you go from work that looks like doing ML research and doing software engineering, the greater the penalty will probably be.
The AIs currently are much better at helping my friends who do ML research all day than me, where I do, you know, weird thinking, and go on these kinds of podcasts, and write emails to people making grant decisions and stuff like that. It's much worse at that stuff. You can see already that it's got a very specialized skill profile.
Fortunately, I do think that, at least for AI safety, there's a big chunk of AI safety research that does look very similar to ML research.
And I do think, you know, my friends who are getting big speedups from AI are safety researchers, and they're doing the kinds of work (control, alignment, et cetera) that I think will be some of the most important things you want these AIs to be helping with at the very beginning.
But yeah, stuff like AI for epistemics, AI for moral philosophy, AI for negotiation, AI for policy design: all that stuff may just not be that good by default, and that's a big concern with the plan.
Yeah, and in general, I think of AIs doing defensive labor as a prediction about the world that you want to try and be thinking about as you make your plans.
It's not a guarantee, and in many cases the answer will be to specialize now in doing the kinds of things that might be hardest for the AIs to do then.
And I think stuff like building a bunch of physical infrastructure to stockpile PPE and vaccines and things like that is a prime candidate for something that just inherently takes a long lead time, and that the AIs might not be that advantaged at by the point that they're good at doing the scary things it's meant to protect against.
Yeah, I think that in general you should expect AIs to be much better at things that there are tighter feedback loops on, where you can recognize success after a short period of time.
And that's one of the reasons why they're really, really good at coding: you can just train them on this very hard-to-fake signal of, like, did the code run after you did whatever you did with it?