Grant Harvey
There's the latency of the human brain. And then there's the latency of the fact that we're doing this over streaming. So there's also the latency of the stream itself.
Yeah.
I want to kind of like zoom out a little bit here because we have seen all of these really incredible deals announced recently between OpenAI and like every chip maker that I know of.
They've got deals with NVIDIA.
They've got deals with AMD.
Yeah, like there's all of this stuff going on.
And NVIDIA is, from my understanding, a partner with you as well.
Is that correct?
Because my understanding is that NVIDIA's CUDA is kind of the reason. Like, yes, their chips are the best right now, but CUDA is really what keeps people locked in.
I wonder, though, if it's not like, you know, because eventually you'll have to scale whatever the current architecture is, in theory.
Or sorry, you'll have to scale whatever architecture actually works, if you're following the Bitter Lesson, which, for people who forget what that is, basically says that general methods that scale, like reinforcement learning scaled properly, are essentially all you need.
That's a very.
So five years from now, if Modular succeeds at its mission, what do you think the AI infrastructure landscape looks like?
You know, with the caveat that we know there's going to be this big shift, where either scaling works or we have to come up with something different.
So we've established that that's a paradigm.
True.
It could still be unanswered by then, I guess.
But what does Modular look like in the future?