Joe Carlsmith
So, I mean, I think one thing you could think, which doesn't necessarily need to be about gray goo, it could also just be about alignment, is something like:
Sure, it would be nice if the AIs didn't violently disempower humans.
It would be nice if, when we created the AIs, their integration into our society otherwise led to good places.
But I'm uncomfortable with the sorts of interventions that people are contemplating in order to ensure that sort of outcome, right?
And I think there are a bunch of things to be uncomfortable about there.
Now, that said, for something like everyone being killed or violently disempowered: that is traditionally something where we think, if it's a real threat (and obviously we need to talk about whether it's real), quite intense forms of intervention are warranted to prevent that sort of thing from happening, right?
So if there were actually a terrorist group that was, you know, working on a bioweapon that was going to kill everyone, or 99.9% of people,
we would think that warrants intervention.
You just shut that down, right?
And even if you had a group that was doing that unintentionally, imposing a similar level of risk, I think many, many people, if that's the real scenario, would think that warrants quite intense preventive efforts, right?
And so, obviously, these sorts of risks can be used as an excuse to expand state power.
There are a lot of things to be worried about with different types of contemplated interventions to address certain types of risks.
But I think there's no royal road there.
You just have to have genuinely good epistemology.
You need to actually know: is this a real risk?
What are the actual stakes?
And then look at it case by case and ask: is this warranted?