Rob Wiblin
And these are the properties we think a really great benchmark would have.
And these are examples of benchmarks we think are good and not so good.
And we had a whole application form that was, in some sense, guiding people, trying to elicit the information about their benchmark that we thought would be most important for determining whether or not it was really informative.
And mostly this was just: be way more realistic, have way harder tasks than existing benchmarks.
Even if you think your tasks are hard enough, they're probably not hard enough.
There was a lot of push in that direction.
So it was a very opinionated and very detailed and very narrow RFP.
And we ended up making $25 million of grants through that, and then another two to three million through the companion RFP, which was broader: all kinds of information, from RCTs to surveys, about AI's impact on the world.
And I'm pretty happy with how that turned out. It was, as you would expect, a lot of effort poured into one particular direction.
And, you know, if you were skeptical of this high-effort approach to grantmaking, you could argue that I could have just put in way less effort and funded twice the volume of grants across 10 different areas, picking up the low-hanging fruit in all of them.
Yeah, so right around when I switched from doing mostly research to doing grantmaking, and especially when I was trying to ramp up this program area with its more inside-view, more understanding-oriented approach to AI safety research, Holden, who had been running the AI team up to that point and was my manager, decided to step away and left the organization.
And I had a working relationship with Holden that involved a lot of arguing and discussing the substance of what I was working on.
And when he left, leadership was stretched thinner, because someone in leadership was gone.
And I think the people who remained in the leadership team didn't have as much context and fluency with all this AI stuff as Holden did.
So I wrote up this big memo saying: we should do AI safety grantmaking in a more understanding-oriented way, we should develop inside views, and here's why I think that would be good.
And I think what I wanted was,