James Manyika
We haven't worked through how we do meta-learning or transfer learning.
So there's a whole bunch of things that we haven't quite solved.
Now, we're making progress on some of those things.
I mean, some of the things that have happened with these large-language universal models is really breathtaking, right?
But I think that, in my view at least, of the collection of things that we have to solve before we get to AGI, there are too many that still feel unsolved to me.
Now, we could have somebody break through in a day.
That's why I'm not ready to give a prediction in terms of timeline.
But these seem like really hard problems to me.
And many of my friends who are working on some of these issues also seem to think these are hard problems, although there are some of them who think that we're almost there: that deep learning will get us to most of the places we need to get to, and reinforcement learning will get us most of what we need.
So those are my friends who think that it's more imminent.
Yeah, some of them say a decade or two.
So there's a lot of real debate about this.
In fact, you may have seen one of the things that I participated in a couple of years ago was Martin Ford put together a book that was a collection of interviews with a bunch of people.
This is his book, Architects of Intelligence.
And he had a wonderful range of people in that book.
I was fortunate enough to be included, but there were many more people who were way more interesting than me, people like Demis Hassabis and Yoshua Bengio and a whole bunch of people.
It was a really terrific collection.
And one of the things that he asked that group who are in that book was to give a view