Lennart Heim
And this is what we see more and more is AI being deployed, AI being used.
And what I'm trying to put forward to these is just, to some degree: guys, actually, it's more complex.
We cannot only look at model benchmarks and say, this is the country which is winning, this is the country which is losing, or we only have a three-month gap.
There's more to it.
In particular, if you think about how export controls bite.
So what I'm putting forward there is: actually, yes, you use compute for training, but training only uses part of the compute.
They will probably build competitive models in China this year, particularly with test-time compute and so on.
But given that compute might be more important in the future, or that you want to deploy it at scale to get your AI agents or whatever the future holds, this is eventually where I think export controls will hold.
And I think where the US at least has a big advantage.
How to then leverage this is another question.
Yeah, I think we use compute loosely here to refer to many different things. But more broadly, what we mean by this is processing units, the integrated circuits which do the computations, right?
And when we talk about AI, we talk about GPUs, or AI chips.
It started in 2012, when AlexNet used two GPUs.
Now we see Elon Musk building clusters with 200,000 of them.
I think he's currently talking about upgrading to a million GPUs.
And I think it's fair to say compute is like the currency of AI, you know. When you're in Silicon Valley, people boast about how many GPUs they have and how big their model is.
And again, it just became this key ingredient for AI, for training these systems, but eventually for deploying them.
It's like water for fish, or air for us: this is what these systems need to be run and to be developed.