Keri Briske
I think people started saying "open weights" because people picked apart the fact that those releases weren't actually open source.
Yeah.
Yeah, well, I think the sky's the limit.
We use ourselves internally.
We have things like deep researchers.
You have your own deep researchers if you've ever gone out to Google or Perplexity.
And you can imagine that we have data that we do not want to upload into an API internally.
And so we have our own deep researchers.
We actually put out that blueprint for others to build their own deep researchers locally and for themselves.
You can specialize them.
We have a lot of customers who are specializing models for their domain. They can take their proprietary data, their IP, their personal data, and specialize a model for their use case and their domain, because nobody knows your domain better than you do, and you do not want to give away your intelligence, right?
So those are things you can do with the model.
I think what's interesting with some of our recipes is that we put out the recipes and some people have taken our models and distilled them on their own.
And when I say distillation, that means taking a larger teacher model and training a smaller model to mimic it; people also quantize, reducing the precision of the weights, so the model is smaller and faster.
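The "reducing the precision of the weights" part can be sketched in a few lines. This is a minimal, illustrative example of symmetric int8 quantization on a toy weight matrix; the matrix, sizes, and function names are my own, not from any NVIDIA recipe:

```python
import numpy as np

# Hypothetical weight matrix standing in for one layer of a model.
rng = np.random.default_rng(0)
weights = rng.normal(size=(4, 4)).astype(np.float32)

def quantize_int8(w):
    """Symmetric int8 quantization: map floats onto [-127, 127] with one scale."""
    scale = float(np.abs(w).max()) / 127.0
    q = np.round(w / scale).astype(np.int8)
    return q, scale

def dequantize(q, scale):
    """Recover approximate float weights from int8 values and the scale."""
    return q.astype(np.float32) * scale

q, scale = quantize_int8(weights)
restored = dequantize(q, scale)

# int8 storage is 4x smaller than float32, and round-to-nearest keeps
# the per-weight error within half a quantization step.
print(q.dtype, q.nbytes, weights.nbytes)
```

Real toolchains add per-channel scales, calibration, and quantization-aware training on top of this basic idea, but the size/precision trade-off is the same.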
So we've seen a lot of people pick up our algorithms to do neural architecture search and even change the architecture of the model.
So if you're a real techie, you can get into the weeds and really do your own thing, like really change the guts of the model with the tools that we've given you.
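At its core, neural architecture search is a search over model configurations against some objective. Here is a toy random-search sketch: the search space, the scoring function, and the parameter budget are all invented for illustration (a real run would train and evaluate each candidate rather than use a closed-form stand-in):

```python
import random

# Hypothetical search space over transformer-ish hyperparameters.
SEARCH_SPACE = {
    "num_layers": [12, 24, 32],
    "hidden_size": [1024, 2048, 4096],
    "ffn_multiplier": [2, 4],
}

def sample_architecture(rng):
    """Draw one candidate architecture uniformly from the search space."""
    return {name: rng.choice(options) for name, options in SEARCH_SPACE.items()}

def score(arch, budget=1e9):
    """Stand-in objective: reward capacity, penalize exceeding a parameter budget."""
    params = arch["num_layers"] * arch["hidden_size"] ** 2 * (2 + arch["ffn_multiplier"])
    capacity = arch["num_layers"] * arch["hidden_size"]
    over_budget = max(0.0, (params - budget) / budget)
    return capacity / 1e4 - over_budget

def random_search(trials=50, seed=0):
    """Evaluate `trials` random candidates and keep the best-scoring one."""
    rng = random.Random(seed)
    return max((sample_architecture(rng) for _ in range(trials)), key=score)

best = random_search()
print(best)
```

Production NAS systems replace random sampling with smarter strategies (evolutionary search, gradient-based relaxations) and replace the toy score with measured accuracy and latency, but the loop structure is the same.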
Yeah, I think what's interesting about the datasets is that, well, there are two things.
We release the datasets that we've either created or acquired as much as we can.