80,000 Hours Podcast
Every AI Company's Safety Plan is 'Use AI to Make AI Safe'. Is That Crazy? | Ajeya Cotra
It sounds like you were a bit on your own.
What did you end up doing with that time?
Yeah.
What sorts of reflections did you have on your career so far, your motivation, and what had been difficult in 2023 and 2024?
Yeah, it's like venture capital that OpenPhil is engaged in, in a way.
Is that in part because Holden really wanted this deep research?
He wanted to more deeply understand the idea.
Both personally and he thought it was, I guess, healthy for the organization.
In your notes, you said that you spent a fair bit of time reflecting in this period about what it had been that you liked about effective altruism, I guess, as an ecosystem and as a mentality, and what things you didn't like so much about it. Tell us about that.
So there's the being more compassionate to a wider range of beings, which I guess is still the case and probably still something you like about the effective altruist approach.
But there was also going into enormous intellectual depth and just really debating things out.
And then there was also the very high integrity about honesty, like not allowing any chicanery whatsoever.
Yeah, it's not necessarily implied.
I guess it's a practical question whether it is or not.
I guess as things evolved, you found that the second one, the intellectual depth, was now lacking from your job.
Were there other things that were changing that made you less enthusiastic?
It feels like on some level, you really were a more natural grant recipient than a grantmaker.
You should have gotten a grant to really go deep on some questions.
Yeah, it's not the case that the world's most impactful organizations are consistently incredibly transparent or even incredibly high integrity.
I guess we should say, for people who don't know, that over this period the environment Open Phil was operating in became a lot more challenging and a lot more hostile.