Dwarkesh Patel
But in-context learning itself is not gradient descent, in the same way that our lifetime intelligence as humans, our ability to do things, is conditioned by evolution, but our actual learning during our lifetime happens through some other process.
I actually don't fully agree with that, but you should continue with that.
Actually, then I'm very curious to understand how that analogy breaks down.
So then it's worth thinking about: if in-context learning and pre-training are both implementing something like gradient descent, why does it feel like in-context learning gets us to this continual-learning, real-intelligence-like thing, whereas you don't get the analogous feeling just from pre-training?
At least you could argue that.
And so if it's the same algorithm, what could be different?
Well, one way you can think about it is: how much information does the model store per token of information it receives from training?
And if you look at pre-training, Llama 3, for example, I think is trained on 15 trillion tokens.
And if you look at a 70B model, that works out to roughly 0.07 bits per token it sees in pre-training, in terms of the information in the weights of the model compared to the tokens it reads.
Whereas if you look at the KV cache and how it grows per additional token in in-context learning, it's about 320 kilobytes per token.
So that's a 35 million fold difference in how much information per token is assimilated by the model.
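For concreteness, here is a minimal sketch of that arithmetic in Python. The architectural constants (80 layers, 8 grouped-query KV heads, head dimension 128) and the 16-bit precision are assumptions based on Llama 3 70B's published configuration, not figures stated in the conversation.

```python
# Back-of-envelope check of the figures above. Constants are assumptions
# from Llama 3 70B's published shape (80 layers, 8 grouped-query KV heads,
# head dim 128) and bf16 precision, not numbers quoted in the episode.

params = 70e9                 # parameter count of the 70B model
bits_per_param = 16           # bf16 weights (assumption)
pretrain_tokens = 15e12       # 15 trillion pre-training tokens

# Information capacity of the weights, spread over the training corpus.
bits_per_token_pretrain = params * bits_per_param / pretrain_tokens
print(f"pre-training: {bits_per_token_pretrain:.3f} bits per token")    # ~0.075

# KV-cache growth for each additional token of context.
layers, kv_heads, head_dim = 80, 8, 128
bytes_per_entry = 2           # bf16 activations (assumption)
kv_bytes_per_token = layers * kv_heads * head_dim * 2 * bytes_per_entry  # x2 for K and V
print(f"in-context: {kv_bytes_per_token // 1024} KB per token")          # 320 KB

# Ratio between the two regimes, compared in bits.
ratio = kv_bytes_per_token * 8 / bits_per_token_pretrain
print(f"ratio: {ratio:.1e}x")                                            # ~3.5e7
```

Under these assumptions the three printed numbers reproduce the 0.07 bits per token, 320 kilobytes per token, and roughly 35-million-fold figures mentioned here.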
I wonder if that's relevant at all.
Stepping back, what is the part of human intelligence that we have most failed to replicate with these models?
This is maybe relevant to the question of thinking about how fast these issues will be solved.
So sometimes people will say about continual learning: look, you could easily replicate this capability, just as in-context learning emerged spontaneously as a result of pre-training.
Continual learning will emerge spontaneously if the model is incentivized to recall information over longer horizons, horizons longer than one session.
So if there's some outer-loop RL, which...