Rob Wiblin
And because AIs just don't do that at all, unlike a new human hire, they don't get way more capable in their first few months on the job as they figure out what works and what doesn't.
They kind of just plateau really quickly.
Some people think that cracking this so-called continual learning is the key to unlocking way more usefulness from AI.
But notably, we kind of didn't see much visible progress on that in 2025.
So for some people, that was also a bearish indicator.
Finally, let's talk about automation of AI research and development in particular.
That looms especially large in this conversation, because if you can fully automate it, you might set off a recursive self-improvement loop, where AIs design better AIs, which go on to design even better AIs, and pretty quickly humans fade into the background and are barely involved in AI development at all.
In my opinion, and I think most people's opinions, it would be a really big deal if you could pull off fully automated AI research and development.
But as people swung towards pessimism, a few arguments resurfaced and got a lot more discussion about why AI getting better at software engineering, as measured in these task-horizon studies, may not lead to fully automated AI research and development any time soon.
First, AI companies are mostly not made up of software engineers. A lot of other work goes on, in particular other aspects of thinking about and experimenting on artificial intelligence that just are not software engineering.
So even if software engineering, the thing these benchmarks actually test, became effectively free, unlimited, and instant, arguably the whole process of AI research and development would pretty quickly get bottlenecked somewhere else, and might only be sped up quite modestly.
Secondly, as we improve AI, it gets harder and harder to find further improvements, because, as in many research fields, the low-hanging fruit has already been plucked. That means we may need a lot of AI assistance and tooling just to maintain the rate of progress we had in the past.
So if you observe that almost all of an AI company's code, 90% or 95%, is now being written by AI, well, maybe all that's doing is allowing them to maintain the same research speed they had before, not go faster, as you might naively expect when you first hear that figure.
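The offsetting effect described here can be made concrete with a toy calculation (the model and numbers are my own illustration, not anything from the conversation):

```python
# Toy model (illustrative assumption, not from the conversation):
# observed progress = effective research effort / difficulty of finding new ideas.
# If ideas get harder at the same rate that AI assistance multiplies effort,
# the observed rate of progress stays flat despite heavy AI involvement.

def progress_rate(human_effort: float, ai_multiplier: float, difficulty: float) -> float:
    """Annual research progress under a simple effort-over-difficulty model."""
    return human_effort * ai_multiplier / difficulty

# Earlier: no AI assistance, baseline difficulty.
before = progress_rate(human_effort=100, ai_multiplier=1, difficulty=1)

# Later: AI writes ~90% of the code (roughly a 10x effort multiplier),
# but further improvements have become ~10x harder to find.
after = progress_rate(human_effort=100, ai_multiplier=10, difficulty=10)

print(before, after)  # both 100.0: the same rate of progress, despite 10x AI help
```

On this sketch, "95% of our code is written by AI" is compatible with research moving no faster than before; the multiplier and the rising difficulty simply cancel.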
This last point can be pretty material; it can pack a punch. Yet the people behind the AI 2027 scenario missed this factor in their first round of modeling a year ago.