Yoshua Bengio
So it's going to take some time before you have total domination by a few corporations or a couple of countries, if AI continues to become more and more powerful.
But we might see those signs already happening: concentration of wealth is a first step towards concentration of power.
If you're incredibly rich, then you have incredibly more influence on politics, and then it becomes self-reinforcing.
A future that is less dangerous...
Less dangerous because we mitigate the risk of a few people basically holding superpower over the planet.
A future that is more appealing is one where the power is distributed, where no single person, no single company or small group of companies, no single country or small group of countries has too much power.
It has to be that, in order to make some really important choices for the future of humanity when we start playing with very powerful AI, those choices come out of a reasonable consensus of people from around the planet, and not just the rich countries, by the way.
Now, how do we get there?
I think that's a great question, but at least we should start putting forward where we should go in order to mitigate these political risks.
Yes, but we have to understand intelligence in a broad way.
For example, human superiority over other animals is in large part due to our ability to coordinate.
So as a big team, we can achieve something that no individual human could against, say, a very strong animal.
But that also applies to AIs, right?
We already have many AIs and we're building multi-agent systems with multiple AIs collaborating.
So, yes, I agree.
Intelligence gives power.
And as we build technology that yields more and more power...
it becomes a risk that this power is misused for acquiring more power, or is misused in destructive ways by terrorists or criminals, or is used by the AI itself against us if we don't find a way to align these systems with our own objectives.