Tim Davis
So today the open infrastructure we have is MAX, which is the framework, and Mojo, which is the programming language.
You can just go to our website today, download it, and start playing with it.
You can write operations for custom silicon.
You can write your own machine learning models and scale them on robots if you want.
All of that is available today.
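That "write an operation" flow is worth making concrete. Here's a minimal Python sketch of the general pattern (define a kernel, register it, call it from a model); every name in it (the OPS registry, register_op, fused_gelu) is hypothetical, and this is not MAX's or Mojo's actual API, just the shape of the idea.

```python
# Illustrative only: the general "define, register, call" pattern for a
# custom op. All names here are hypothetical, not MAX's real API.
import numpy as np

OPS = {}  # hypothetical registry mapping op names to kernels

def register_op(name):
    """Register a kernel under `name` so a model graph can look it up."""
    def wrap(fn):
        OPS[name] = fn
        return fn
    return wrap

@register_op("fused_gelu")
def fused_gelu(x):
    # tanh approximation of GELU, the kind of kernel you might hand-tune
    # for custom silicon
    return 0.5 * x * (1.0 + np.tanh(np.sqrt(2.0 / np.pi) * (x + 0.044715 * x**3)))

print(OPS["fused_gelu"](np.linspace(-2.0, 2.0, 5)))
```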
We do increasingly work with enterprise customers. A lot of the ones that come to us are still of the belief that they want data-center-specific AI at massive scale.
And so, of course, they're looking for cost gains and performance wins.
And, you know, we're working with a lot of customers now, but one of the sort of testimonial stories is that we worked with an awesome company called InWorld.
They're an advanced company, originally in gaming, but now building an AI runtime that provides a runtime environment for AI in your applications.
And they came to us, a very advanced team, all ex-DeepMind, and said, look, we're using all this open LLM infrastructure and other things, but here's our use case.
And again, to reiterate, this is how people approach what they're trying to solve.
We have a text-to-speech model, and we want to get the first two-second audio chunk back to the user in under 200 milliseconds. That's what we want to be able to achieve.
Yeah, we know exactly what they're trying to solve for.
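For a sense of what that 200-millisecond target means in code, here's a minimal Python sketch of measuring time-to-first-chunk from a streaming text-to-speech call. Both `synthesize_stream` and the `fake_stream` stand-in are hypothetical names for illustration, not InWorld's or Modular's actual code.

```python
# Hypothetical sketch: measure time-to-first-chunk for a streaming TTS call.
import time

def time_to_first_chunk(synthesize_stream, text):
    """Seconds until the first audio chunk comes back from the stream."""
    start = time.perf_counter()
    next(synthesize_stream(text))  # blocks until chunk 1 is produced
    return time.perf_counter() - start

def fake_stream(text):
    """Stand-in generator for a real TTS stream (hypothetical)."""
    time.sleep(0.15)                 # pretend model latency
    yield b"\x00" * (2 * 24000 * 2)  # ~2 s of 24 kHz 16-bit PCM

latency = time_to_first_chunk(fake_stream, "Hello there")
print(f"time to first chunk: {latency * 1000:.0f} ms")
assert latency < 0.200, "missed the 200 ms budget"
```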