Andrea Miotti
And I think most people say an overwhelming no to having superintelligence
replace them across the board.
And I think governments still have time to act and say, a lot of AI development is fine.
It can be great for economic competitiveness.
It can be great even for military uses.
But we should say a hard no to superintelligence.
Prohibiting the development of superintelligence means no AI that can escape human control, because such a system could endanger national and global security.
And I think if we make that choice, we're going to stay in control.
We will face a lot of disruption, but we're going to have a good future with AI.
Yeah, absolutely.
So I think this is why it's important that these rules are targeted and really surgical.
It's not about regulating the entirety of AI development.
That would be quite complicated.
I think it can be done.
A lot of things are regulated, and some of them, frankly, are over-regulated, and they're much more mundane than AI.
All drugs that we consume are regulated.
A lot of food is regulated.
Cars are regulated.
I could go on and on.
So it's totally possible.