Demis Hassabis
Imagine that was the prompt.
That's pretty underspecified.
And so the current systems wouldn't know, I think, what to do with that, how to narrow that down to something tractable.
And I think it's similar with something like, look, just make a better version of yourself.
That's too unconstrained.
But we've done it, as you know, with AlphaEvolve, on things like faster matrix multiplication.
So when you hone it down to a very specific thing you want, it's very good at incrementally improving that.
But at the moment, these are more like incremental improvements, sort of small iterations.
Whereas if you wanted a big leap in understanding, you'd need a much larger advance.
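The "hone it down to a specific objective, then improve incrementally" loop described here can be sketched as a toy hill-climbing routine. This is purely illustrative of the general idea, not AlphaEvolve's actual algorithm (which evolves programs with an LLM proposing mutations); the function names and the example objective are made up for the sketch.

```python
import random

def hill_climb(score, candidate, mutate, steps=1000, seed=0):
    """Toy hill climbing: keep a mutation only if it improves the score.

    Illustrates 'incremental improvement on a narrow, well-specified
    objective'. A vague goal with no scoring function gives this loop
    nothing to climb -- which is the point made in the conversation.
    """
    rng = random.Random(seed)
    best, best_score = candidate, score(candidate)
    for _ in range(steps):
        new = mutate(best, rng)
        new_score = score(new)
        if new_score > best_score:  # accept only strict improvements
            best, best_score = new, new_score
    return best, best_score

# Hypothetical narrow objective: maximize -(x - 3)^2, optimum at x = 3.
score = lambda x: -(x - 3.0) ** 2
mutate = lambda x, rng: x + rng.uniform(-0.1, 0.1)
best, best_score = hill_climb(score, 0.0, mutate)
```

Each accepted step is a small, local gain; nothing in the loop can produce a qualitatively new candidate outside the neighborhood the mutation operator explores.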
Yes.
If it was just incremental improvements, that's how it would look.
So the question is, could it come up with a new leap like the Transformers architecture?
Could it have done that back in 2017 when we did it and Brain did it?
And it's not clear that these systems, something like AlphaEvolve, would be able to make such a big leap.
So for sure, these systems are good.
We have systems, I think, that can do incremental hill climbing.
Mm-hmm.
And that's the kind of bigger question: is that all that's needed from here?
Or do we actually need one or two more big breakthroughs?
Yeah, I don't think anyone has systems that have shown unequivocally those big leaps.