Yoshua Bengio
Of course, datasets and architectures are things you always want to play with, but I think the crucial thing is more the training objectives, the training frameworks.
For example, going from passive observation of data to active agents which learn, by intervening in the world, the relationships between causes and effects; the sort of objective functions which could allow the highest-level explanations to arise from learning, which I don't think we have now; the kinds of objective functions which could be used to reward exploration, the right kind of exploration.
So these kinds of questions are neither in the data set, nor in the architecture, but more in how we learn under what objectives and so on.
Sort of almost guiding some aspect of learning.
So I was talking to Rebecca Sachs just an hour ago, and she was talking about lots and lots of evidence that infants seem to clearly pick out what interests them in a directed way.
And so they're not passive learners.
They focus their attention on aspects of the world which are most interesting, surprising in a non-trivial way, a way that makes them change their theories of the world.
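The idea of rewarding "surprise in a non-trivial way" can be sketched as a curiosity-style intrinsic reward: an agent maintains its own forward model of the world and is rewarded for transitions that model predicts poorly, so the reward fades as its theory of the world improves. This is a minimal illustrative toy, not Bengio's proposal; the world, the tabular model, and all names here are assumptions for the sketch.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy deterministic world (an assumption for illustration):
# the next state is (state + action + 1) mod N_STATES.
N_STATES, N_ACTIONS = 8, 2

def step(state, action):
    return (state + action + 1) % N_STATES

# Tabular forward model: predicted next-state distribution per (s, a),
# initialized to uniform ignorance.
model = np.full((N_STATES, N_ACTIONS, N_STATES), 1.0 / N_STATES)

def intrinsic_reward(s, a, s_next):
    # Surprise = negative log-probability the agent's own model
    # assigned to what actually happened.
    return -np.log(model[s, a, s_next])

def update_model(s, a, s_next, lr=0.5):
    # Move the predicted distribution toward the observed outcome.
    target = np.zeros(N_STATES)
    target[s_next] = 1.0
    model[s, a] += lr * (target - model[s, a])

state = 0
rewards = []
for t in range(200):
    action = int(rng.integers(N_ACTIONS))
    nxt = step(state, action)
    rewards.append(intrinsic_reward(state, action, nxt))
    update_model(state, action, nxt)
    state = nxt

early = float(np.mean(rewards[:20]))
late = float(np.mean(rewards[-20:]))
print(early, late)  # surprise shrinks as the world model improves
```

Early transitions are novel and carry high intrinsic reward; once the model has absorbed the world's regularities, the same transitions stop being rewarding, which is one way an objective can push a learner toward the not-yet-explained rather than the merely repeated.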
In the sense of abstraction.
They're not getting some.
I don't think that having more depth in the network in the sense of instead of 100 layers, we have 10,000 is going to solve our problem.
What is clear to me is that engineers and companies and labs and grad students will continue to tune architectures and explore all kinds of tweaks to make the current state of the art ever so slightly better.