David Duvenaud
Well, and I think it's a really weird corner case to imagine this world where like we die, but then our desires are ultimately fulfilled.
Like that just seems like, yes, in principle it could happen, but it would be like this weird corner case because probably if we die, something else has gone horribly wrong.
Right.
I mean, I think that just puts a cap on the best we can hope for in terms of satisfying everyone's preferences, but we were already in that situation anyway.
So I guess I'll say, given that we already have to have some sort of compromise, and not everyone's going to get what they want, we should at least work to make it so that some people, or some compromise among humans, gets what it wants, as opposed to just letting competition decide the outcome.
Sure.
Well, first I'll lay out the most common arguments I hear in favor of this.
So one is...
pretty soon we're going to have these amazing AIs.
So they're going to handle this for us.
And we don't really need to worry about these kinds of coordination problems.
And I guess I feel like, yes, if there was a big jump in capabilities and everybody got it on the same day and everybody asked their AI, what should I do for the good of humanity and did that, then that would be a recipe for really good outcomes.
But I don't think that's what's going to happen.
We're going to continue to see people gradually getting more powerful AIs, with those spreading out roughly according to power level, and people continuing to optimize mostly for their own interests just due to competitive pressures.
So yeah, my fear is that business as usual doesn't give us such a jump in capabilities that we're suddenly able to coordinate in a way we weren't before.
The other common argument is, well, if people a thousand years ago got their way, we wouldn't have made all the moral progress that we've made since then.
And so it would have been a huge mistake from our point of view today to have let them lock in.
So by induction, it will be a huge mistake from the point of view of future beings to have let us lock in.
And I think that's kind of like a moral optical illusion in that, yes, if you measure sort of