Joe Carlsmith
Or like, I don't know.
I mean, obviously you need to think about trade-offs, and there are in principle a lot of people you could be nice to, but the principle of "be nice when it's cheap" is one I'm very excited to try to uphold.
I also really hope that other people uphold that with respect to me, including the AIs, right?
I think we should be golden-ruling here.
We're thinking about, oh, we're inventing these AIs, right? There's some way in which I'm trying to embody attitudes towards them that I hope they would embody towards me.
It's unclear exactly what the ground of that is, but I really like the golden rule.
And I think a lot about it as a basis for our treatment of other beings.
And so I think "be nice when it's cheap" is a rule where, if everyone implements it, we potentially get a big Pareto improvement. Or, I don't know exactly a Pareto improvement, but a good deal. A lot of good deals.
And yeah, I'm just into pluralism. I've got uncertainty; there's all sorts of stuff swimming around there.
And then, as a matter of having cooperative and good balances of power and deals, and of avoiding conflict, I think we should find ways to set up structures that lots and lots of people, value systems, and agents are happy with, including non-humans: people in the past, AIs, animals.
I really think we should take a very broad sweep in thinking about what sorts of inclusivity we want to reflect in a mature civilization, and set ourselves up for doing that.
I think we as a civilization are going to have to have a very serious conversation about what sort of servitude is appropriate or inappropriate in the context of AI development.