Dwarkesh Patel
And the reason that labs are orders of magnitude off this figure right now is that the models are nowhere near as capable as human knowledge workers.
Now, you might be like, look, how can the standard have suddenly become labs have to earn tens of trillions of dollars of revenue a year, right?
Like, until recently, people were saying, can these models reason?
Do these models have common sense?
Are they just doing pattern recognition?
And obviously, AI bulls are right to criticize AI bears for repeatedly moving these goalposts. That criticism is very often fair.
It's easy to underestimate the progress that AI has made over the last decade.
But some amount of goalpost shifting is actually justified.
If you showed me Gemini 3 in 2020, I would have been certain that it could automate half of knowledge work.
And so we keep solving what we thought were the sufficient bottlenecks to AGI.
We have models that have general understanding, they have few-shot learning, they have reasoning, and yet we still don't have AGI.
So what is a rational response to observing this?
I think it's totally reasonable to look at this and say, oh, actually, there's much more to intelligence and labor than I previously realized.
And while we're really close, and in many ways have surpassed what I would previously have defined as AGI, the fact that model companies are not making the trillions of dollars in revenue that AGI would imply clearly reveals that my previous definition of AGI was too narrow.
And I expect this to keep happening into the future.
I expect that by 2030, the labs will have made significant progress on my hobby horse of continual learning, and the models will be earning hundreds of billions of dollars in revenue a year.
But they won't have automated all knowledge work.
And I'll be like, look, we made a lot of progress, but we haven't hit AGI yet.
We also need these other capabilities.