80,000 Hours Podcast
Every AI Company's Safety Plan is 'Use AI to Make AI Safe'. Is That Crazy? | Ajeya Cotra
It's very similar to how, right now, we don't spend a huge fraction of society's GDP on biodefense and cyberdefense and moral philosophy and these other things.
It's just like, that's not what people want to pay for.
And AI is just another thing that accelerates the creation of products and services people want to pay for.
And this isn't very high on the list.
Yeah, I think that that is likely to come up, especially for physical defenses, like manufacturing PPE, or scaling up the ability to rapidly create medical countermeasures.
And then also for social and policy things.
So I can imagine that AIs could be very helpful in figuring out what kind of agreement between the U.S. and China would be mutually beneficial, and how we could enforce it.
But the way human decision-making works still probably requires humans from the U.S. and China to come together and talk about it, have a conference or convening, and come to a decision that they ratify and feel good about.
And that could be a bottleneck.
But I do think that for deep theoretical problems, you can speed things up by having efforts going in parallel, yet the right solution that's out there somewhere involves multiple leaps, where it's hard to think of the next insight without the foundation of the earlier one.
So really, even if you have 100 AIs working in parallel, what will happen is that one of them comes up with the first step of the insight, and then everyone works in parallel on finding the next insight.
But you still need to go three or four steps in.
Yeah, I think that in general, you want to be thinking about what the AIs at the time would be most comparatively disadvantaged at.
They'll have all these advantages over us.
They'll understand the situation much better at that point in time than we do now.
They'll be able to think faster, move faster and so on.