Ajeya Cotra
that doesn't want to help you.
But it's a lot easier to see how you potentially solve problems other than alignment.
Like if you assume, well, the alignment part, we feel like we've got a good handle on.
But there's a huge list of other problems that are being created during the intelligence explosion, like the fact that AI now, if people get access to it, could invent other kinds of destructive technologies that we don't yet have good countermeasures for.
In that case, it's clear how the AI could help you figure out what the countermeasures ought to be.
That makes sense.
I think the distinction I was drawing is that, for people who thought the alignment problem was extremely hard to solve and that we were way off track to solving it, the idea of getting the AI to solve the problem is kind of self-contradictory, because, well, I wouldn't trust the AI at all.
Anything that it proposed, I would assume was sabotaging us.
If you're on the side of thinking, well, the alignment problem is actually the easier part of things, I think that that's a relatively straightforward technical problem that we are on track to solve.
But there's this laundry list of 10 other issues.
It's then very obvious: well, we'll have the brilliant AGI, so why don't we just use that to solve all the other things?
And also, I'm inclined to trust it and believe it.
So which kind of specific problems arising from the intelligence explosion are you envisaging, wanting to get the AGI to help us out with?
How do you ensure that advances in AI don't lead to a war between the US and China, that kind of thing?
So I interviewed Will MacAskill and Tom Davidson from Forethought earlier in the year.
And the organization has a long list of what they call grand challenges, all of which they suspect are probably amenable to this kind of AGI labor during crunch time.
I think other ones are like ensuring that society doesn't end up locked into particular values prematurely, in a way that cuts off our ability for further reflection and changing our minds.
There's also the potential for AI or AGI, insofar as it's very steerable and follows instructions, to be used in power grabs by the people who are operating it.
I guess there's also space governance: this question of, if we actually do start to be able to use resources in space, how would we share them? In particular, how would we divide them such that there isn't conflict ahead of time, because people anticipate that once you start grabbing resources in space, you're on track to become overwhelmingly dominant.