They rely on light, rather than electrons, to transmit data.
The company claims this OPU delivers up to 100x the performance and energy efficiency of today's leading GPUs and accelerator cards.
Let's get to it.
Patrick Bowen, you're the Neurophos co-founder and CEO, and Michael Stewart, managing partner at M12, formerly known as Microsoft's venture fund.
Patrick, the devil's in the detail here.
The use of light to transmit data instead of electrons, and again, I'm not an engineer, but what is this OPU design that you're developing?
It's essentially designed to be a drop-in replacement for a GPU, but it actually runs 50 to 100 times faster, with 50 to 100 times higher raw energy efficiency, in the inference use case, not for training.
That's right.
You're an engineer by trade, by background, but then you go to the dark side of venture capital.
Why back this project?
You're going to tell me that there's a gap, there's a problem being solved for here.
There are lots of inference solutions out there, different technological underpinning, but why this one?
There's Lightmatter, Ayar Labs, Renovis, for example.
Yeah, absolutely.
I think the thing is, wherever you go, whether it's in space or it's on Earth, AI is fundamentally hardware limited, and the hardware is fundamentally power limited, right?
And the move to space is an attempt to try to solve the power consumption part of the problem.
But wherever you go, even in space, power is still going to be limited, and we're solving that problem at the fundamental physics level.
You know, this is an industry that is still dominated by the GPU, NVIDIA, right?
But with the specialist inference platforms, you know, the argument that NVIDIA makes is that we give five-year visibility on our product roadmap, and that's why each generation of silicon that comes out, you just drop it in.
How is this more than a lab experiment right now with Neurophos?