Ajeya Cotra
If this doesn't work out, why do you think it's most likely to have failed for them?
unless they have really strong commitments.
But I guess other mechanisms would be that it's legally required.
At this point, the government basically insists that most of the compute goes towards this, or at least that most of it isn't going towards recursive self-improvement.
Or I guess if the companies could reach some sort of agreement where they're saying, well, we would all like to spend more of our compute on this kind of thing.
So we're gonna have some, I guess, contract where we're gonna spend like 50% of all of our compute, so that we don't lose relative position in particular.
It might be a little illegal, but yeah, maybe we could carve out an exception to antitrust law for this one.
I guess a different mechanism is that, inasmuch as the government is taking a massive interest, it could help try to coordinate this one way or another.
I thought that you might say that the most likely reason for this to fail was that it just turned out that alignment is incredibly hard.
You get egregious misalignment even at relatively low levels of intelligence, and we don't really figure out how to fix that early enough to get useful work out of them.
I guess another way that they could end up actually just not making that much of an effort is if the window is relatively brief and it just takes a long time to get projects off the ground.
And they haven't really planned this ahead.
So, you know, they end up debating it back and forth.
And then by the time they've figured out that they actually do want to do this, the window has passed. I mean, I suppose it's nominally in these various papers, but I wonder whether they actually are thinking ahead about how this would feel, and whether they'll have the decision-making capability to decide to redirect enormous resources towards this other effort.
Okay, so that's the AI companies who I guess we're imagining would mostly be focused on this strategy for AI technical alignment.
But you've been thinking about this more in the context of Open Philanthropy and what niche it could fill.
What would Open Philanthropy need to do if dumping billions of dollars onto this plan became its mainline strategy?
If I think about this kind of psychologically, I could imagine, you know, if I was leading Open Philanthropy, or I guess I was one of the donors being advised, and we did have these transparency requirements, and we did start getting a sense that an intelligence explosion might be kicking off.
I could imagine dithering for a long time, rather than deciding to commit billions of dollars towards this, because there's only so much money, only a particular size of endowment.