Dwarkesh Patel
So Quentin Pope had this interesting blog post laying out his reasoning for why he doesn't expect a sharp takeoff.
So humans had a sharp takeoff: 60,000 years ago, we seem to have already had the cognitive architecture we have today.
And then 10,000 years ago, the agricultural revolution, modernity, dot, dot, dot.
What was happening in those 50,000 years?
Well, you had to build this sort of cultural scaffold where you can accumulate knowledge over generations.
This is an ability that exists for free in the way we do AI training today.
If you retrain a model, it doesn't lose what came before. I mean, in many cases models are literally distilled, but they can be trained on each other.
You know, they can be trained on the same pre-training corpus.
They don't literally have to start from scratch.
So there's a sense in which this cultural loop, which took humans a long time to get going, just comes for free with the way we do LLM training.
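As an aside, here is a minimal sketch of what "literally distilled" can look like in practice. None of this is from the episode: the models, names, and sizes (`teacher`, `student`, `VOCAB`, `DIM`) are toy illustrative stand-ins, not any real training setup. The point is just that the student learns from the teacher's output distribution rather than starting from scratch.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Toy stand-ins; in practice these would be large transformers.
VOCAB, DIM = 1000, 64
teacher = nn.Sequential(nn.Embedding(VOCAB, DIM), nn.Linear(DIM, VOCAB))
student = nn.Sequential(nn.Embedding(VOCAB, DIM // 2), nn.Linear(DIM // 2, VOCAB))
opt = torch.optim.Adam(student.parameters(), lr=1e-3)

tokens = torch.randint(0, VOCAB, (32,))  # stand-in for pre-training text

# Distillation step: the student matches the teacher's full next-token
# distribution (soft targets), inheriting knowledge it never relearned raw.
with torch.no_grad():
    teacher_logits = teacher(tokens)
student_logits = student(tokens)
loss = F.kl_div(
    F.log_softmax(student_logits, dim=-1),
    F.softmax(teacher_logits, dim=-1),
    reduction="batchmean",
)
opt.zero_grad()
loss.backward()
opt.step()
```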
When would you expect that kind of thing to start happening?
And a more general question about multi-agent systems and a sort of independent AI civilization and culture.
And can you identify the key bottleneck that's preventing this kind of collaboration between LLMs?
Maybe the way I would put it is...
So you've talked about how you were at Tesla leading self-driving from 2017 to 2022.
And you saw this progress firsthand: we went from cool demos to thousands of cars out there actually doing autonomous drives.
Why did that take a decade?
Like what was happening through that time?