Rob, Luisa, and the 80000 Hours team
You're working on, I think, what you call a gradual disempowerment index.
What is that and why do you think it will be useful?
Yeah, I guess an example of that would be: inasmuch as humans are using AI tools to make better decisions, is that empowerment or is that disempowerment?
It's kind of a bit ambiguous which direction you're going in.
And I guess it could be empowerment to start with and then disempowerment later.
Yeah, the trouble in my mind is that there are so many different ways things could play out that it's very difficult to find any objective measure, like GDP or, I don't know, life statistics, or something a statistical agency would collect, where you could say, well, this is definitely going to go down if there's gradual disempowerment.
You've written that you think AI constitutions are very important and much more important than most people appreciate at the moment.
What is an AI constitution and why does it matter?
Yeah.
Do you think this is going to be a moment when the penny drops, and people realise it's way more important to have control over the, you know, I guess the post-training stage of AIs, and that it's going to be incredibly important for society as a whole?
What sort of system prompt is put into things like Claude and ChatGPT?
Does this also make you more enthusiastic about open source AI, so people can set their own system prompt and do their own fine-tuning?
Because you could have competition between countries as well, potentially.
So you could have some countries that basically impose a system prompt on all the AIs that are being operated in their country.
But what if you could access the Russian model or, I don't know, the German model, where the government has taken less of an interest, or alternatively has given it a different system prompt in order to mess with other countries?
I see.
Yeah.
Well, I mean, to some extent, I guess there are some things in the system prompt that a government might impose that are good.
I think I've done a previous interview where someone was saying, well, why don't we just demand lawful AI, where, at a bare minimum, the system prompt says: don't break the law yourself, and if someone asks you to help them break the law, don't help them.