Rob Wiblin
And that worked great.
It made a big difference until it also started to hit diminishing returns.
So then they naturally moved on to throwing compute at inference scaling and reinforcement learning for reasoning, as I was just describing.
Each individual thing, as they scale it up 10-, 100-, 1,000-, 10,000-fold, always kind of peters out.
But so long as there's always something else to move on to, then the big picture trend will remain one of steady improvement like we've seen before.
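A rough way to picture that "peters out" pattern, as a minimal sketch: scaling-law results are usually reported as power laws in compute, so each further 10x spent on a single lever buys a constant relative improvement but a shrinking absolute one. The exponent and numbers below are made up for illustration, not measured values.

```python
# Illustrative only: an assumed power-law scaling curve, loss ~ C**(-ALPHA).
# ALPHA is a made-up exponent, not a measured value from any lab.
ALPHA = 0.3

def loss(compute: float) -> float:
    """Model loss at a given compute budget under the assumed power law."""
    return compute ** -ALPHA

prev = loss(1)
for c in [10, 100, 1_000, 10_000]:
    cur = loss(c)
    # Each further 10x of compute buys a smaller absolute improvement:
    print(f"{c:>6}x compute: loss {cur:.3f}, improvement {prev - cur:.3f}")
    prev = cur
```

On this assumed curve, every additional decade of compute spent on one lever delivers roughly half the absolute improvement of the previous decade, which is one way to read "peters out"; moving on to a new lever effectively starts a fresh curve.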
I'm sure their research teams are feverishly working to figure out other ways to efficiently convert computer chips into smarter and more useful and more capable AI models.
That's the entire job of their research.
Will they succeed again for, you know, the fourth or fifth time?
Or have they kind of run out of tricks for now where we might have a couple of years to wait?
That is the fundamental unknown that everyone, both inside and outside the companies, is more or less forced to repeatedly speculate about until they either do or they don't.
So those were some technical updates and new technical results.
But there are a lot of worries that people have always had that also became more salient in the second half of 2025.
Here are a couple that stand out to me.
First, there was an ever-growing gap over that time between what AI models seemed like they could do in demos and how much they were actually upending most workplaces and the world around us and our personal lives.
This peculiar situation was summed up by the podcaster Dwarkesh Patel: AI models keep getting more impressive at the rate that short-timelines people predict, but more useful at the rate that long-timelines people predict.
And I think this growing gap basically made analysts and ordinary people begin to distrust their own judgment about how much something that seemed really impressive on the screen, when they were talking to it, could actually make a difference and increase productivity in real-world situations.
One reason for that failure to transfer to real applications could be the second point.
AIs just clearly do not learn in the same way that humans do.
Whatever our weaknesses, humans really do quickly get a lot better at stuff with just a couple of samples, basically.
And we also just build up and accumulate new knowledge and new skills over time.