Yoshua Bengio
So to improve future versions of itself, the AI is able to copy itself onto other computers and eventually stop depending on us in some ways, or at least on the engineers who built those systems.
So this is to try to track the capabilities that could give rise to a rogue AI eventually.
I'm often asked whether I'm optimistic or pessimistic about the future with AI.
And my answer is, it doesn't really matter if I'm optimistic or pessimistic.
What really matters is what I can do, what every one of us can do in order to mitigate the risks.
And it's not like each of us individually is going to solve the problem, but each of us can do a little bit to shift the needle towards a better world.
And for me, it is two things.
It is raising awareness about the risks, and it is developing the technical solutions to build AI that will not harm people.
That's what I'm doing with LawZero.
For you, Steven, it's having me on today to discuss this so that more people can understand the risks a bit more, and that's going to steer us in a better direction.
For most citizens, it is getting better informed about what is happening with AI, beyond the optimistic picture that it's all going to be great.
We're also playing with unknown unknowns of a huge magnitude.
So we have to ask this question, and I'm asking it for AI risks, but really it's a principle we could apply in many other areas.
We didn't spend much time on my trajectory.
I'd like to say a few more words about that, if that's okay with you.
So we talked about the early years in the 80s and 90s.
The 2000s were the period when Geoff Hinton, Yann LeCun, I, and others realized that we could train these neural networks to be much, much better than the other methods researchers were working with at the time.