Jensen Huang
I would just add to it maybe AI data processing processors, because guess what?
You need long-term memory.
You need short-term memory.
The KV cache processing is really intense.
AI memory is a big deal.
You'd kind of like your AI to have good memory.
And just dealing with all the KV caching around the system, really complicated stuff.
Maybe it wants to have a specialized processor.
Maybe there's other things, right?
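The KV-cache mechanics he's describing can be sketched roughly like this: a minimal, illustrative decoder-side cache in Python. All names here are hypothetical; NumPy stands in for GPU kernels, and a real system shards and evicts this cache across many devices, which is where the complexity he mentions comes from.

```python
import numpy as np

class KVCache:
    """Minimal per-layer key/value cache for autoregressive decoding.

    Without a cache, each new token recomputes keys and values for the
    whole prefix; with one, each step appends only the new token's K and
    V and reuses the rest. Managing that reuse across a whole system is
    the "really intense" part.
    """

    def __init__(self, head_dim: int):
        self.head_dim = head_dim
        self.keys = []    # one (head_dim,) vector per cached token
        self.values = []

    def append(self, k: np.ndarray, v: np.ndarray) -> None:
        self.keys.append(k)
        self.values.append(v)

    def attend(self, q: np.ndarray) -> np.ndarray:
        """Attention over everything cached so far, for one query vector."""
        K = np.stack(self.keys)             # (seq_len, head_dim)
        V = np.stack(self.values)           # (seq_len, head_dim)
        scores = K @ q / np.sqrt(self.head_dim)
        weights = np.exp(scores - scores.max())
        weights /= weights.sum()
        return weights @ V                  # (head_dim,)

# Decoding loop: each step caches one new K/V pair instead of
# recomputing attention inputs for the entire prefix.
cache = KVCache(head_dim=4)
rng = np.random.default_rng(0)
for _ in range(3):
    k, v, q = rng.normal(size=(3, 4))
    cache.append(k, v)
    out = cache.attend(q)
print(len(cache.keys))  # → 3
```

The cache is effectively the model's short-term memory: it grows linearly with context length per layer and per head, which is why dedicated hardware for moving and storing it is plausible.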
So you see that NVIDIA's viewpoint, our viewpoint, is now not GPU.
Our viewpoint is looking at the entire AI infrastructure and what it takes for these incredible companies to get all of their workload, which is diverse and changing, through it.
Look at the transformer.
The transformer architecture is changing incredibly.
If not for the fact that CUDA is easy to operate on and iterate on, how would they run their vast number of experiments to decide which transformer version, which attention algorithm, to use?
How do you disaggregate?
CUDA helps you do all that because it's so programmable.
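The kind of experimentation he credits to programmability, swapping attention algorithms without touching the rest of the model, can be sketched like this. This is an illustrative sketch, not NVIDIA's code: the function names are hypothetical and plain NumPy stands in for CUDA kernels.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def full_attention(Q, K, V):
    """Standard scaled dot-product attention over the whole sequence."""
    d = Q.shape[-1]
    return softmax(Q @ K.T / np.sqrt(d)) @ V

def sliding_window_attention(Q, K, V, window=2):
    """Each query attends only to the last `window` positions --
    one of many attention variants a researcher might try."""
    d = Q.shape[-1]
    out = np.empty_like(Q)
    for i in range(len(Q)):
        lo = max(0, i - window + 1)
        out[i] = softmax(Q[i] @ K[lo:i + 1].T / np.sqrt(d)) @ V[lo:i + 1]
    return out

def transformer_block(x, attention_fn):
    """The rest of the block stays fixed; only the attention kernel swaps.
    (Residual connection only; projection weights omitted for brevity.)"""
    return x + attention_fn(x, x, x)

# Try two attention variants against the same block and inputs.
rng = np.random.default_rng(0)
x = rng.normal(size=(5, 8))
for fn in (full_attention, sliding_window_attention):
    y = transformer_block(x, fn)
```

On programmable hardware, trying a new variant is writing a new function like the ones above; on a fixed-function ASIC, it can mean a new chip. That is the flexibility argument in miniature.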
And so the way to think about our business now is: look at when all of these ASIC companies or ASIC projects started, three, four, five years ago. I've got to tell you, that industry was super adorable and simple.
There was a GPU involved.