Rene Haas
Well, he builds boxes, right? He builds DGX boxes, and he builds all kinds of stuff.
Yeah, lock-in is a... One can either look at it as an offensive maneuver you take, where I'm going to do these things so I can lock people in, or you provide an environment in which it's just so easy to use your hardware that by default you're then, quote, locked in. Let's go back to the AI workload commentary.
So today, if you're doing general-purpose compute, you're writing your algorithms in C or JAX or something of that nature. And now, let's say, you're wanting to write something in TensorFlow or Python. In an ideal world, what does the software developer want?
The software developer wants to be able to write their application at a very high level, whether that's a general-purpose workload or an AI workload, and just have it work on the underlying hardware, without really having to know what the attributes of the underlying hardware are. I don't know if there are any software people in the room. Software people are wonderful.
They are inherently lazy, and they want to be able to just have their application run and have it work. So as a computer architecture platform, it's incumbent upon us to make that easy. So when we think about providing a heterogeneous platform that is homogeneous across the software, that is a very big initiative for us. And we're doing it today.
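The "write at a high level and let the platform optimize" idea can be sketched generically. This is not Arm's actual library API; it is a plain NumPy example, chosen because NumPy delegates array math to whatever optimized BLAS backend the machine provides (NEON on Arm, AVX on x86), so the developer's source code stays identical across hardware:

```python
# Hypothetical illustration: the developer writes portable, high-level code;
# the underlying math library picks the hardware-specific fast path.
import numpy as np

def predict(weights: np.ndarray, inputs: np.ndarray) -> np.ndarray:
    """One dense layer followed by a ReLU, written with no knowledge
    of the CPU it will run on."""
    # np.matmul dispatches to an optimized BLAS routine for this CPU.
    return np.maximum(weights @ inputs, 0.0)

w = np.array([[1.0, -2.0],
              [0.5,  3.0]])
x = np.array([2.0, 1.0])
print(predict(w, x))  # same source runs unchanged on any supported CPU
```

The design point is that the acceleration lives below the API surface: the application never mentions the instruction set, which is the kind of transparency described above.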
We have a technology called Kleidi, and these are Kleidi libraries for AI. And they do that for the CPU. All the goodness we put inside our CPU products that allows for acceleration by using the libraries, we make available openly. There's no charge. For developers, it just works.
So going forward, since the vast majority of the platforms today are Arm-based and the vast majority are going to run AI workloads, we just want to make that really easy for folks.
Yeah, so the current update is that it's planned to go to trial on December 16, which isn't very far away. And I can appreciate, because we talk to investors and partners, that what they hate the most is uncertainty. So that I can appreciate. But on the flip side, I would say the principles as to why we filed the claim are unchanged. And that's about all I can say.