
Rob Wiblin

Speaker
1910 total appearances

Podcast Appearances

80,000 Hours Podcast
Every AI Company's Safety Plan is 'Use AI to Make AI Safe'. Is That Crazy? | Ajeya Cotra

And I think I have sympathy for that.

Hopefully I represented that perspective reasonably well, but I just feel like, in my experience, having done the homework really qualitatively changes the details of the decisions you make in ways that I think can be really high impact.

Like, one thing that I'm able to do, having gone through the whole rigmarole of forming views, is work with researchers to find the most awesome version of their idea by the lights of my goals, pitch them on that, and sort of co-create grant opportunities.

And I think there's just something that I maybe won't be great at defending, but I feel like there are other nebulous benefits beyond that.

And I really like operating that way.

Late 2023, yeah.

Or just in tension with, in the short term, making a large volume of grants.

So I think I ended up pursuing a compromise. One thing that just comes with the territory of this role is that there are grantees we made grants to in the past who are up for renewal.

And part of the responsibility of being the person in charge of this program area is that you investigate those renewals and make decisions about whether we should keep the grantees on or not.

And for those grants, I tried to follow what an Open Phil canonical decision-making process would be.

And so I tried to pursue kind of a barbell strategy for a while. On the one hand, there were either renewals or people who knew us and reached out to ask us to consider grants, where I wouldn't hold myself to the standard of really understanding and defending the proposal on the technical merits, but would lean more on heuristics like: this person seems aligned with the goal of reducing AI takeover risk, this person has a broadly good research track record, and so on, and try to make those grants relatively quickly.

But then I would also be trying to develop a different funding program or some grants that I really wanted to bet on where I would try and hold myself to that standard and try and really write down why I thought this was a good thing to pursue.

And it turned out that the second thing basically turned into making a bet, from late 2023 to mid 2024, on AI agent capability benchmarks and other ways of gaining evidence about AI's impact on the world.

Yeah, yeah.

So I launched this request for proposals. Open Phil has done technical safety requests for proposals before, but this was by far the narrowest and most deeply justified technical RFP we had put out at that time, where I was like: we are looking for benchmarks that test agents, not just models that are chatbots.