Rob Wiblin
Yeah, it's probably a big deal when the AIs start to get really good at any given thing we're funding.
And once we start to see signs of life there, we should be prepared to potentially go really big on that.
And like you said earlier, I don't think crunch time is 100% a special thing.
We absolutely shouldn't be waiting until crunch time to do anything at all.
It's just the prediction that crunch time is the point when a lot of things that were hard to automate before become easier to automate.
So if it turns out, for example, that AI is really good at math research, which I think is plausible, then maybe we should deliberately shift our technical grantmaking towards more mathy kinds of work, because that's an area where you can churn out a lot more.
It's just so much more tractable.
So I think it's a big thing just to have a function that is looking out for these things, and that is maybe poking OpenPhil and OpenPhil's grantees
to consider shifting their work towards more easily automatable things, and to repeatedly test whether their work can be automated.
And then down the line, I could imagine something like even just having separate accounting for
the rest of our grantmaking versus the grantmaking that goes towards paying for AIs for our grantees.
We already pay for ChatGPT Pro subscriptions and ChatGPT API credits for tons and tons of grantees.
I think it's worth just making it a bit more salient in our minds:
what fraction of our giving is going towards that?
And do we endorse its size?
And is there any place where we should be going bigger?
And are we on track, with the percentage climbing the way we think it should be?
Does that seem in line with the way AI capabilities are climbing?