Joe Carlsmith
But part of what I'm trying to do in that essay is to say: no, I think we can be naturalists and also be decent humans who remain in touch with a rich set of norms about how we relate to the possibility of creating creatures, altering ourselves, et cetera.
But I do think his is a relatively simple prediction: science masters nature, humans are part of nature, so science masters humans.
Yeah, I think an uncomfortable thing about the conceptual setup in these abstract discussions is this: you have an agent, and it "fooms," some amorphous process of going from a seed agent to a superintelligent version of itself, often imagined to preserve its values along the way.
There are a bunch of questions we can raise about that.
But many of the arguments people often invoke as reasons to be scared of AI are things like: value is very fragile as you foom, small differences in utility functions can de-correlate very hard and drive in quite different directions; agents have instrumental incentives to seek power, and if it were arbitrarily easy to get power, they would do it.
These are very general arguments, and they suggest this is not just an AI thing, right? It's no surprise: take a thing, make it arbitrarily powerful such that it's, you know, God emperor of the universe or something. How scared are you of that? Clearly we should be equally scared of that, or at least really scared of that, with humans too, right?
So part of what I'm saying in that essay is that, in some sense, this is much more a story about balance of power.
Right.