Rob, Luisa, and the 80000 Hours team
But I suppose it's like a race against time before we perhaps like spread out off of Earth again.
Or it sounds more like death finds a way or disorganization finds a way.
I mean, I guess we're super into speculative land here, but to be more concrete in the speculation, if you have the US and China leading in the race to develop AGI and superintelligence, well, I suppose one story is if you have incredibly fast takeoff or recursive self-improvement, then one of them could pull very far ahead of the other and indeed, by extension, very far ahead of everyone else.
And then there'll be the temptation for them to just grab power globally and control everyone — basically not allow any other independent political or military powers that could threaten them in future. That's one possibility.
The other one is if the US and China remain somewhat at parity with one another, there would certainly be a temptation for them to basically split the Earth between them — or split the resources between them — and disempower everyone else, if they can get away with it.
I mean, I guess there are other like middle powers or other like regional powers that might be able to resist that to a significant extent.
But I don't know.
Yeah, I think those seem like pretty, I guess, especially the second seems like quite a plausible pathway to me.
Well, I guess in that case, you really want to avoid extinction or destruction of complex life somehow.
That's the only really super bad scenario.
Exactly.
You haven't mentioned yet improving coordination mechanisms as a way that we could avoid these negative competitive dynamics in future.
It seems like a very obvious thing would be... I mean, actually, in the interview with Carl Shulman, again, a year or two ago now, he was talking about how...
If we could come up with technology where everyone could inspect some AI model and see that it would, in all circumstances, follow through on some agreement that had been reached between, say, the US and China — confirm to a high degree of certainty that there's no backdooring, there's no secret loyalties or anything like that —
then they could potentially give that AI the military power to enforce that agreement over them and everyone else indefinitely.
I guess that could be bad depending on what the agreement is, but at least it would potentially prevent destructive competition indefinitely.
What do you think of developing that sort of technology?
Yeah, elaborate on that.
How could it backfire?