Nathan Lambert
Yeah. So there are two main techniques that they implemented that probably account for the majority of their efficiency, and then there's a lot of implementation details that maybe we'll gloss over or get into later that also contribute to it. But those two main things are: one, they went to a mixture of experts model, which we'll define in a second.
And then the other thing is that they invented this new technique called MLA, multi-head latent attention. Both of these are big deals. Mixture of experts is something that's been in the literature for a handful of years, and OpenAI with GPT-4 was the first one to productize a mixture of experts model.
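As a rough illustration of the core idea behind multi-head latent attention, here is a minimal sketch, not DeepSeek's actual implementation: instead of caching full per-head keys and values, the layer caches one small latent vector per token and re-expands it into keys and values at attention time. The dimensions and module names are illustrative assumptions, and real MLA also compresses queries and handles rotary position embeddings separately, which this sketch omits.

```python
# Minimal sketch of latent KV compression (the core of MLA), under assumed dimensions.
# Causal masking and rotary embeddings are omitted for brevity.
import torch
import torch.nn as nn

class LatentKVAttention(nn.Module):
    def __init__(self, d_model=1024, n_heads=8, d_latent=128):
        super().__init__()
        self.n_heads, self.d_head = n_heads, d_model // n_heads
        self.q_proj = nn.Linear(d_model, d_model)
        self.kv_down = nn.Linear(d_model, d_latent)  # compress: this small latent is all we cache
        self.k_up = nn.Linear(d_latent, d_model)      # re-expand latent -> keys
        self.v_up = nn.Linear(d_latent, d_model)      # re-expand latent -> values
        self.out = nn.Linear(d_model, d_model)

    def forward(self, x, kv_cache=None):
        B, T, _ = x.shape
        latent = self.kv_down(x)                      # (B, T, d_latent)
        if kv_cache is not None:                      # append new latents to the cached ones
            latent = torch.cat([kv_cache, latent], dim=1)
        q = self.q_proj(x).view(B, T, self.n_heads, self.d_head).transpose(1, 2)
        k = self.k_up(latent).view(B, -1, self.n_heads, self.d_head).transpose(1, 2)
        v = self.v_up(latent).view(B, -1, self.n_heads, self.d_head).transpose(1, 2)
        attn = torch.softmax(q @ k.transpose(-2, -1) / self.d_head ** 0.5, dim=-1)
        y = (attn @ v).transpose(1, 2).reshape(B, T, -1)
        return self.out(y), latent                    # cache `latent`, not the full K/V tensors
```

The point of the design is that the KV cache shrinks from two full-width tensors per layer to one narrow latent per token, which is where the inference savings come from.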
And what this means is, when you look at the common models around that most people have been able to interact with that are open, right? Think Llama. Llama is a dense model, i.e. every single parameter or neuron is activated as you're going through the model for every single token you generate, right? Now, with a mixture of experts model, you don't do that, right?
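For contrast with the mixture-of-experts sketch further down, here is a minimal illustration of what "dense" means in this sense: every token is multiplied through every weight matrix, so all parameters participate in every generated token. The dimensions and the plain ReLU feed-forward block are illustrative assumptions, not Llama's exact architecture.

```python
# A "dense" feed-forward block: every weight is used for every token.
import torch
import torch.nn as nn

class DenseFFN(nn.Module):
    def __init__(self, d_model=1024, d_ff=4096):
        super().__init__()
        self.up = nn.Linear(d_model, d_ff)
        self.down = nn.Linear(d_ff, d_model)

    def forward(self, x):                              # x: (batch, seq, d_model)
        return self.down(torch.relu(self.up(x)))       # all parameters touch every token
```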
How does the human brain actually work, right? It's like, oh, well, my visual cortex is active when I'm thinking about vision tasks, and my amygdala is active when I'm scared, right? These different aspects of your brain are focused on different things. A mixture of experts model attempts to approximate this to some extent.
It's nowhere close to what a brain architecture is, but different portions of the model activate, right? You'll have a set number of experts in the model and a set number that are activated each time. And this dramatically reduces both your training and inference costs.
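Here is a hedged sketch of that idea: a mixture-of-experts feed-forward layer with a fixed pool of experts and top-k routing, so only a couple of experts run per token. The expert count, k, and dimensions are made-up illustrative values; DeepSeek's actual MoE additionally uses shared experts and load-balancing machinery not shown here.

```python
# Sketch of a mixture-of-experts layer: a router picks the top-k experts per token,
# and only those experts' parameters are used for that token.
import torch
import torch.nn as nn

class MoEFFN(nn.Module):
    def __init__(self, d_model=1024, d_ff=2048, n_experts=8, k=2):
        super().__init__()
        self.k = k
        self.router = nn.Linear(d_model, n_experts)   # learns which experts suit which tokens
        self.experts = nn.ModuleList([
            nn.Sequential(nn.Linear(d_model, d_ff), nn.ReLU(), nn.Linear(d_ff, d_model))
            for _ in range(n_experts)
        ])

    def forward(self, x):                             # x: (batch, seq, d_model)
        scores = self.router(x)                       # (batch, seq, n_experts)
        weights, idx = torch.topk(scores.softmax(dim=-1), self.k, dim=-1)
        out = torch.zeros_like(x)
        for slot in range(self.k):                    # only k experts run per token
            for e, expert in enumerate(self.experts):
                mask = (idx[..., slot] == e)          # tokens routed to expert e in this slot
                if mask.any():
                    out[mask] += weights[..., slot][mask].unsqueeze(-1) * expert(x[mask])
        return out
```

The set number of experts is the pool size (`n_experts`), and the set number activated each time is `k`; total parameters grow with the pool, but per-token compute only grows with `k`.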
Because now, if you think about the parameter count as the sort of total embedding space for all of this knowledge that you're compressing down during training, then when you're embedding this data in, instead of having to activate every single parameter every single time you're training or running inference, now you can just activate a subset.
And the model will learn which expert to route to for different tasks. And so this is a humongous innovation in terms of, hey, I can continue to grow the total embedding space of parameters. And so DeepSeek's model is 600-something billion parameters, right? Relative to Llama 405B, it's 405 billion parameters, and relative to Llama 70B, it's 70 billion parameters, right?
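A back-of-the-envelope illustration of why that matters for cost, using commonly cited figures (roughly 671 billion total and about 37 billion activated parameters per token for DeepSeek-V3, and 405 billion dense for Llama 3 405B) and the usual rule of thumb of about 2 FLOPs per active parameter per token for a forward pass; treat the numbers as rough approximations.

```python
# Rough comparison of per-token forward-pass compute: what counts is the
# *activated* parameters, not the total parameter count.
def forward_flops_per_token(active_params: float) -> float:
    return 2 * active_params              # ~2 FLOPs per active parameter per token

deepseek_total, deepseek_active = 671e9, 37e9   # MoE: only a subset runs per token
llama_dense = 405e9                              # dense: every parameter runs per token

print(f"DeepSeek-V3: {forward_flops_per_token(deepseek_active)/1e9:.0f} GFLOPs/token "
      f"(out of {deepseek_total/1e9:.0f}B total parameters)")
print(f"Llama 405B:  {forward_flops_per_token(llama_dense)/1e9:.0f} GFLOPs/token")
# => roughly 74 vs 810 GFLOPs per token, even though DeepSeek has more total parameters.
```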