Demis Hassabis
I think that's a nonsense.
They're not PhD intelligences.
They have some capabilities that are PhD level, but they're not, in general, capable of performing across the board at the PhD level, and that's exactly what general intelligence should be.
In fact, as we all know from interacting with today's chatbots, if you pose the question in a certain way, they can make simple mistakes even with high school maths and simple counting.
So that shouldn't be possible for a true AGI system.
So I think that we are maybe, I would say, sort of five to ten years away from having an AGI system that's capable of doing those things.
Another thing that's missing is continual learning, this ability to teach the system something new online or adjust its behavior in some way.
And so a lot of these, I think, core capabilities are still missing and maybe scaling will get us there.
But I feel if I was to bet, I think there are probably one or two missing breakthroughs that are still required and will come over the next five or so years.
No, I mean, we're not seeing that internally, and we're still seeing a huge rate of progress.
But also, we're sort of looking at things more broadly.
You see with our Genie models and Veo models and recently Nanobanana.
It's bananas.
Yes, it's bananas.
Well, I think the future of a lot of these creative tools is that you're just going to sort of vibe with them, just talk to them.
And it'll be consistent enough where, like with Nanobanana, what's amazing about it is that it's an image generator.
It's state-of-the-art and best in class.
But one of the things that makes it so great is its consistency.
It's able to follow your instructions about what you want changed and keep everything else the same.