Dwarkesh Patel
I mean, obviously, the very basic analogy would just be fine-tuning on reflection bits, but I feel like in practice that probably wouldn't work that well.
So I don't know if you have some take on what the analogy of this thing is.
Just to make sure I understood, the reason that the collapse is relevant to synthetic data generation is because you want to be able to come up with synthetic problems or reflections which are not already in your data distribution?
I guess what I'm saying is...
You can't just keep scaling, quote-unquote, reflection on the same amount of prompt information and then get returns from that.
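[Editor's note: to make the collapse point concrete, here is a toy numerical sketch. It is an illustration added here, not something described in the conversation, and it assumes the standard "fit a model to data, sample from it, refit on the samples" picture of model collapse: each generation trains only on outputs derived from the same underlying information, and the fitted distribution's spread shrinks and drifts, losing the tails.]

```python
# Toy illustration of model collapse (hypothetical sketch, not the speakers' method):
# repeatedly fit a simple Gaussian "model" to its own samples and watch
# the distribution's diversity shrink across generations.
import numpy as np

rng = np.random.default_rng(0)

# "Real" data: a stand-in for the original prompt/experience distribution.
data = rng.normal(loc=0.0, scale=1.0, size=50)

for generation in range(51):
    # Fit the current generation's model to whatever data it was trained on.
    mu, sigma = data.mean(), data.std()
    if generation % 10 == 0:
        print(f"gen {generation:2d}: mean={mu:+.3f}, std={sigma:.3f}")
    # The next generation trains only on samples from the previous fit,
    # i.e. "reflections" generated from the same underlying information.
    data = rng.normal(loc=mu, scale=sigma, size=50)
```

The standard deviation trends downward over generations, which is the toy analogue of reflections that never add information beyond the prompts they were generated from.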
Have you seen this super interesting paper arguing that dreaming is a way of preventing this kind of overfitting and collapse?
That the reason dreaming is evolutionarily adaptive is to put you in weird situations that are very unlike your day-to-day reality, to prevent this kind of overfitting?
This is a very ill-formed thought, so I'll just put it out and let you react to it.
The best learners that we are aware of, which are children, are extremely bad at recollecting information.
In fact, at the very earliest stages of childhood, you will forget everything.
You're just an amnesiac about everything that happens before a certain age.
But you're extremely good at picking up new languages and learning from the world.
And maybe there's some element of being able to see the forest for the trees.
Whereas if you compare it to the opposite end of the spectrum, you have...
LLM pre-training, where these models are literally able to regurgitate, word for word, the next thing in a Wikipedia page.
But their ability to learn abstract concepts really quickly the way a child can is much more limited.
And then adults are somewhere in between, where they don't have the flexibility of childhood learning, but they can, you know, memorize facts and information in a way that is harder for kids.
And I don't know if there's something interesting about that.
And this is also relevant to preventing model collapse.
Let me think.