Ajeya Cotra
I guess another worry would be that the AI models end up being able to cause trouble before they end up being capable enough to figure out solutions.
A classic case there would be: imagine we put a lot of effort (I guess it would be a bit stupid to do this) into training an AI model that's extremely good at developing new viruses or new bacteria, basically modifying diseases to make them worse.
I mean, there are people who are using AI to develop new viruses.
I guess they're using it to develop vaccines and medical treatments, but that sort of stuff can then be repurposed for other things.
But if that sort of highly specialized model arrives first, before you end up with a model that has a sufficient understanding of all of society, biology, and medicine to figure out what the good countermeasures are, then we'll need a different approach than this one.
Yeah, that was going to be another concern of mine: inasmuch as the AIs are very helpful, you might imagine that they're very helpful at the idea-generation or strategizing stage, but still quite bad at actually running a business or figuring out how to do all of the manufacturing.
So if they could come up with a great strategy for countering new bioweapons, where they're like, here's the widget that you should use.
Go and make 10 billion of them.
They're like, can you help us with that?
It's like, no, I'm not very good at that.
Good luck running the team of thousands of humans and robots that are actually executing on the plan.

Why is the crunch-time aspect, or the intelligence explosion taking off, actually even relevant to when we would want to start doing this? Because you might just think that if AI can help us do research or do work to solve any of these problems, then as soon as it's able to do that, we want to do it, whether or not an intelligence explosion is kicking off.
So you're thinking about this strategy not just as a description of what other organizations should potentially work on, or of what the AI companies are already planning to do, but also because you think it should influence what Open Philanthropy plans to do over the next couple of years. Potentially Open Philanthropy's best play might be to have billions of dollars waiting at this relevant crunch time and then disburse them incredibly quickly, buying a whole lot of compute to get AIs to solve these problems.
So an alternative approach to this would be that at the point that we get a heads up that we think an intelligence explosion is beginning to take place, we do everything we can to pause at that stage, to slow down, basically to arrest that process, so that rather than having to rush in three or six months, get the AIs to fix all of these issues, we buy ourselves a bunch more time.
Why not adopt that as the primary approach instead?
So yeah, we should probably clarify: although you think this is among our best bets, in an ideal world, do you think we would go substantially slower through all of this?
Because, you know, as good a plan as this might be, we'll really be white-knuckling it, not confident that it's necessarily going to work.
Yeah, because in as much as we're slowing down to do something, this is a big part of the thing that we're slowing down to do.
So this is a big part of the company's plan for technical alignment.