Rob Wiblin
the decision-making process that we have in place as an org, and then it's approved.
And if the right thing to do is to spend a billion dollars on some particular strain of work that's super automatable, you wouldn't trust some random junior person to make that call. You might need to have a different process for that. And I don't know what that process would look like, but I think that would be one thing to figure out.
Yeah.
So I think there are two possibilities here.
One possibility is that by the time it's the right move to dump a bunch of money on crunch-time AI labor, Open Phil itself has already been largely automated.
And that's actually an easy world, because in that world we have a visceral sense that AIs are really helpful: maybe we've slowed down our junior hiring, and all our program associates are AIs right now. We are totally transformed as an organization. So the evidence, and the conviction to pull the trigger, might be easier to achieve.
And then actually we have a bunch of labor.
So maybe we have a thousand people on the AI team instead of the 45 we have now, and they can figure out all this stuff much more quickly.
But I think the concerning possibility is that there's jaggedness: maybe AI is extremely good at math, extremely good at technical AI safety, and good at certain specific kinds of manufacturing that could be really useful for a PPE play. But it's not good at everything. We haven't automated ourselves. It's not that good at doing our jobs, because there wasn't much of that stuff in the training data.
We're just not...