Rob, Luisa, and the 80,000 Hours team
We should be handing over to our AI children.
Can you defend that?
I mean, this, I think, does hinge on moral anti-realism, right?
Because if you thought that there were just, like, objective moral facts that are mind-independent, then it's more like science.
Then I think it is true to say that, you know, if we'd locked in our views on the natural sciences in 1000 AD, that just would have been worse, and we would have been more wrong.
And people just would have been making errors all the time if they'd locked it in so they couldn't change their minds about that.
And so if you do think that... I mean, it could be that there are objective moral facts, but we're not getting any closer to them, or we're not likely to ever find them out.
But you could have a view that they exist and we're getting closer to them, in which case I would disagree.
Well, let me give you a picture in which someone could be actively in favor of human disempowerment.
So let's say that you think: the thing that I really value is, I don't know, either well-being or satisfying preferences, something like that.
And I think that AIs in future will be able to have their preferences satisfied and that they will have preferences.
They will potentially have well-being as well.
But I think that most humans disagree, and they're going to basically use AIs for their own purposes, without being concerned about their preferences or their well-being.
That's the moral atrocity that I'm concerned might occur indefinitely, that people would lock in forever.
So in fact, I might be in favor of AIs basically taking over and seizing the reins, so that possibility is precluded, and they'll have some control over resources and their preferences will get satisfied.
Does that make sense?
Yeah, I guess another framing would be: well, there's lots of disagreement among humans.
Like, I have a particular set of ideas about how things ought to go, and other people have, you know, views that partially overlap but are different.
And then you just add in future AIs, or the AIs that will exist, as kind of a different player, and it's like: well, which do I want to have inside my coalition?