Andrea Miotti
But I think with AI, the first step is just to put a clear, surgical red line: you can develop AI systems that are specialized and focused on a specific narrow set of abilities, things like an AI for scientific discovery on proteins.
We'll have some risks, but it's fine.
But just put a clear normative line on no superintelligence, defined as AI that could replace and outcompete humans and that can pose these major national security risks.
In some ways, it's the same approach we have with other technologies.
We don't just let companies build nuclear bombs.
We have rules about that.
We can let them build civilian power plants, but there is some scrutiny.
Nobody wants a private company to build a nuclear bomb.
Nobody allows a private company to build a chemical weapon.
Of course, this requires some regulation.
It's not zero, but I think it's a trade-off well worth making, given the level of risk and given that these risks are acknowledged by the makers of this technology themselves.
They're not saying everything is going to be fine.
Some of them, like the CEO of Anthropic, say there's a 25% chance of essentially human extinction.
Sam Altman, CEO of OpenAI, the makers of ChatGPT, says superhuman machine intelligence is the greatest threat to the continued existence of humanity.
Elon Musk has made similar statements, citing something like a 20% chance of annihilation.
They're being very open about the risks here.
And I think we should heed their warnings and put a clear line in the sand.
No superintelligence.