Andy Halliday
the major companies, either using the free version or paying roughly $20 a month to get access to those models.
Whereas DeepSeek, as an open-source model, is less expensive.
And its usage is two to four times higher across the African continent than the other models.
And Huawei, a Chinese company that provides much of the mobile phone infrastructure in those developing countries, has also partnered with DeepSeek to advance the use of DeepSeek in those countries.
Now, I'm moving over to something about DeepSeek on the technical side.
There'll be a quiz on this afterward.
DeepSeek has just introduced a new technique in LLM inference that's advancing its capability in pure reasoning in a dramatic way.
So, you know, the Chinese companies, starved of the scaling compute capabilities that are available if you can acquire top-end data center infrastructure like the NVIDIA Blackwell chips, have innovated around efficiencies along two different dimensions.
I'll circle back to this, but one of those two dimensions is the use of sparsity.
Now, sparsity is the opposite of density in the terminology of AI.
Dense means that you're using every layer of the network in each inference run.
That's a dense, deep neural network.
And sparsity means you're only activating certain portions of it.
So if you have a 100-billion-parameter model, any one inference run dynamically assesses which portions of that deep neural network, which layers of the LLM, have to be activated.
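As a rough back-of-the-envelope illustration, here is a minimal Python sketch of what that saving looks like per inference step. The expert counts are hypothetical, chosen only to make the arithmetic concrete; they are not DeepSeek's actual configuration.

```python
# A rough, illustrative calculation: how much compute sparsity saves
# per inference step. The expert counts below are hypothetical
# assumptions, not DeepSeek's actual configuration.
total_params = 100e9      # the "100 billion parameter" model above
num_experts = 16          # experts the parameters are divided among (assumed)
active_experts = 2        # experts actually activated per step (assumed)

active_params = total_params * active_experts / num_experts
print(f"Dense run touches {total_params / 1e9:.0f}B parameters")
print(f"Sparse run touches {active_params / 1e9:.1f}B parameters")
# Dense run touches 100B parameters
# Sparse run touches 12.5B parameters
```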
And this has given rise to the primary architecture for LLMs today, which is called mixture of experts.
So the only experts that are activated in this context are the ones which are relevant to the query.
And that reduces the computational overhead, makes for a more efficient and effective inference run, reduces the cost in both energy and compute time, and allows a larger context window to be processed.
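To make the routing idea concrete, here is a minimal, self-contained PyTorch sketch of top-k mixture-of-experts routing. All names and sizes (SimpleExpert, TopKMoELayer, NUM_EXPERTS, TOP_K, D_MODEL) are illustrative assumptions, not DeepSeek's architecture; the sketch shows only the generic technique described above: a router scores the experts, and only the top-scoring ones actually run.

```python
# A minimal sketch of top-k mixture-of-experts routing.
# Toy setup with assumed names and sizes, not DeepSeek's architecture.
import torch
import torch.nn as nn
import torch.nn.functional as F

NUM_EXPERTS = 8   # total expert networks in the layer (assumed)
TOP_K = 2         # experts activated per token: the "sparse" part (assumed)
D_MODEL = 16      # hidden dimension of the toy model (assumed)

class SimpleExpert(nn.Module):
    """One small feed-forward expert network."""
    def __init__(self, d_model):
        super().__init__()
        self.ff = nn.Sequential(
            nn.Linear(d_model, 4 * d_model),
            nn.ReLU(),
            nn.Linear(4 * d_model, d_model),
        )

    def forward(self, x):
        return self.ff(x)

class TopKMoELayer(nn.Module):
    """Routes each token to its TOP_K most relevant experts."""
    def __init__(self, d_model, num_experts, top_k):
        super().__init__()
        self.router = nn.Linear(d_model, num_experts)  # gating network
        self.experts = nn.ModuleList(
            SimpleExpert(d_model) for _ in range(num_experts)
        )
        self.top_k = top_k

    def forward(self, x):                       # x: (tokens, d_model)
        scores = self.router(x)                 # (tokens, num_experts)
        weights, indices = scores.topk(self.top_k, dim=-1)
        weights = F.softmax(weights, dim=-1)    # normalize over chosen experts
        out = torch.zeros_like(x)
        # Only the selected experts run on each token; the rest stay idle.
        for slot in range(self.top_k):
            for e, expert in enumerate(self.experts):
                mask = indices[:, slot] == e    # tokens routed to expert e
                if mask.any():
                    out[mask] += weights[mask, slot:slot + 1] * expert(x[mask])
        return out

layer = TopKMoELayer(D_MODEL, NUM_EXPERTS, TOP_K)
tokens = torch.randn(5, D_MODEL)                # 5 toy tokens
print(layer(tokens).shape)                      # torch.Size([5, 16])
```

Note how the inner loop only invokes an expert on the tokens routed to it; the unselected experts do no work for those tokens, which is exactly the compute saving the speaker describes.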