Keri Briske
And so we actually kept the Ultra within a box.
So it does take eight GPUs to run inference on it, to train it, or just to have it within the system.
But we did that very specifically.
I can't say in the future that we'll stick to those rules for Ultra.
We might expand, but...
That was our thinking behind Nano, Super, and Ultra.
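For context, here is a minimal sketch of what serving a model that "takes eight GPUs" typically looks like in practice, using vLLM-style tensor parallelism; this is an illustration, not the speaker's setup, and the model ID below is a placeholder rather than a real Ultra checkpoint.

```python
# Hedged sketch: shard an Ultra-sized model across 8 GPUs for inference.
from vllm import LLM, SamplingParams

llm = LLM(
    model="nvidia/placeholder-ultra-model",  # hypothetical model ID
    tensor_parallel_size=8,                  # split the weights across 8 GPUs
)

params = SamplingParams(max_tokens=128, temperature=0.7)
outputs = llm.generate(["What does it take to serve an Ultra-class model?"], params)
print(outputs[0].outputs[0].text)
```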
Yeah, we did a survey on what most enterprises were using, what their cloud instances are, what's available to them.
And a lot of people were using Ampere A10s or A10Gs or A100s.
And so we really wanted to build for that.
And then, of course, we work with our GeForce team to understand what GPUs are going into laptops and what we need to do to make sure we have a really great, efficient, small model.
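As a rough illustration of that sizing logic, the snippet below (mine, not the speaker's) detects the local GPU and its memory and suggests a model tier; the memory thresholds are assumptions chosen only to match the GPUs mentioned above.

```python
# Hedged sketch: pick a model tier based on the GPU actually available.
import torch

if torch.cuda.is_available():
    name = torch.cuda.get_device_name(0)
    mem_gib = torch.cuda.get_device_properties(0).total_memory / 2**30
    if mem_gib >= 70:       # e.g. A100 80GB class
        tier = "Super"
    elif mem_gib >= 20:     # e.g. A10 / A10G 24GB class
        tier = "Super (quantized) or Nano"
    else:                   # laptop GeForce class
        tier = "Nano"
    print(f"{name}: ~{mem_gib:.0f} GiB -> suggested tier: {tier}")
else:
    print("No CUDA GPU detected; a Nano-class model on CPU may be the only option.")
```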
Yeah, and we do have, I think we announced, a little thing called Spark, which is a little development station.
So Nano's great for that, too.
It's going to be great.
It's so cute.
It's so cute.
It is.
No, I think that was a different one.
I think that was our data center rack, right?
During COVID, right?
Yes.