Tim Davis
Or is it not?
Because we are spending an enormous amount of capital
on the belief that it is.
And there may be utility there.
I'm not suggesting that there is not utility in that investment.
But I do agree deeply with Yann LeCun and Richard Sutton and others who have made the case that, look, autoregressive LLMs are not actually the path to superintelligence.
We need a different form of innovation.
Like, I was at Google when Noam and Aidan and others created the Transformer with "Attention Is All You Need," and it changed the world.
But, you know, are we convinced that that's the final architecture and that, you know, that is reflective of what we see as intelligence?
You know, I personally am not convinced.
And I think it would be great to be able to see a different approach.
And so the sloshing of all this capital is predicated on this future of just keep scaling the FLOPs and keep building the data centers.
And maybe that's a viable path.
But I do think it's right to question it.
Yeah, search and learning are really the primary techniques of being able to scale the future of these models.
Sorry, Corey, you were about to say something.
Yeah.
And I think even if you look at humans, right, I'm in no way a neurologist, but I do know that we don't execute the vast majority of our brain function when we operate in the real world.
And so, again, this comes back a little bit to the idea of interpretability, and actually understanding why models make certain decisions at certain points, when different neurons are firing inside these model topologies.