And I think that's scary, you know, that's quite a scary scenario, partly because of the speed and people not having time to react.
And then there are sort of intermediate scenarios where some things got automated, maybe people really handed the military over to the AIs, or automated science.
There have been some rollouts, and that's giving the AIs power that they don't have to take, or we're doing all our cybersecurity with AIs, stuff like that.
And then there are worlds where you more fully transitioned to a kind of world run by AIs, and in some sense humans voluntarily did that.
Maybe there were competitive pressures, but you kind of intentionally handed off huge portions of your civilization.
And at that point, I think it's likely that humans have a hard time understanding what's going on.
A lot of stuff is happening very fast: the police are automated, the courts are automated.
There's all sorts of stuff.
Now, I tend to think a little less about those scenarios, because I think they're correlated with being further down the line.
I think humans are hopefully not going to just go, oh, yeah, you built an AI system?
Let's just... And in practice, when we look at technological adoption rates, it can go quite slow, and obviously there are going to be competitive pressures, but in general I think this category is somewhat safer.
But even in this one, I don't know, it's kind of intense.
If humans have really lost their epistemic grip on the world, if they've sort of handed off the world to these systems, even if you say, oh, there are laws, there are norms,
I really want us
to have a really developed understanding of what's likely to happen in that circumstance before we go for it.
I think you're right in picking up on this assumption in the AI risk discourse of what we might call kind of