Andrea Miotti
Are you building something that could kill everybody on the planet?
Well, we should stop you from doing that.
We should put rules and say, absolutely not.
Yeah.
I think also with other technologies, laws are not code.
They leave a lot of leeway for governments and for the judicial process to take its course.
So even with nuclear bombs, in the end, I think in the US Atomic Energy Act the definition is pretty broad, to give the government the power to intervene.
It doesn't exactly define every single physical reaction that needs to occur for it to be a nuclear bomb.
And it's the same with things like chemical weapons.
It's broadly defining the category of things that we don't want, and then giving power to the judicial branch, the executive branch, and the police to intervene if they see this.
But with superintelligence, we see that the companies are converging on this plan of AI that can replace and out-compete people across the board, and AI that has a series of capabilities that are in themselves national security concerns.
So AI that is capable of hacking, AI that is very capable at manipulating people, and AI that is capable of automating AI R&D itself, which is a way for them to accelerate the process towards superintelligence.
So I think what works is a combination: defining it as AI that is vastly more competent than people and can outcompete them across the board, together with these other capabilities that are on the road to superintelligence, what we could call precursor capabilities, a bit like we define precursor chemicals for things like fentanyl.
I think that's a pretty robust way to both track the goal and track the intermediate progress that would tell us we're getting closer and that we need to draw a line.
And I think this is very enforceable, in the same way that we enforce rules on chemical weapons and nuclear weapons.