Azeem Azhar
Large language models pushing for scale are doing kind of amazing and interesting things, but they are pretty lumbering.
And there are other approaches.
You know, there are alternative architectures that people are using today.
For example, Liquid AI, an MIT spin-out, has models that are about 10 times as efficient as transformers.
You've got Yann LeCun at Meta arguing, in his framework, that what is needed to move beyond these autoregressive, exponentially decaying large language models are architectures with a much stronger sense of the world.
Demis Hassabis is a Nobel laureate, runs DeepMind, built some of the greatest AI technologies of the moment.
What he'll say is, look, we don't know whether really big breakthroughs are going to be required to get to the next level of AI capability.
My own view is that you probably will.
You generally have always needed scientific breakthroughs to improve things, even if it's a car engine or the way in which you build a bridge.
But Demis, who knows much more about this than I do, would likely say, without putting words in his mouth, we don't know.
We don't know how far this will go.
We don't know whether it will necessarily take us there.
So all of those things, I think, are frictions on OpenAI.
They add to the bear case.
They talk about really, really deep scientific and technical risk.
They talk about some market risk.
They talk about general execution risk through which OpenAI needs to sail.
And of course, there's the question that's just come in about the recent MIT paper, which, you know, pours cold water on the bull hypothesis.