Corey Knowles
I will say they've come a long way.
I spend a lot of time working with those models.
And ChatGPT's, Grok's, Google's, even Claude's, they were all fast models.
Which is a thing that's been impressive, because when it first started it was not that way. When they first started releasing those models, it was like... and it was mind-blowing even though it was 10 seconds. Yeah, and I think the challenge is, like, you get these awkward... like, you sort of go to speak over it, and then it's
And NVIDIA got a deal with Intel in the middle of that.
Oh, yeah.
I said, you know, the reason the jury is still out for me is because I feel like what we're watching right now is this certain amount of concentration in smaller models.
However, that's largely fed by this idea that we're going to scale this up bigger and then we're going to work to make it more efficient.
We're going to work to streamline it.
We're going to make it faster.
We're going to make it smaller.
And this idea that, you know, smaller models are getting better.
And there's a big focus right now, at least it seems like to me, on better data, cleaning data, training models on increasingly better data.
So I don't know, does scaling have to continue in order to keep pushing these smaller models up the food chain as well?
I don't know, but it's sure fascinating and a lot of fun to talk about.
Yeah, and you're still at like 97% or something.
Yeah, there's a loss, but it's minimal.
Or our brains, for that matter, in a lot of ways.
Or we're still asking the same question five years from now.
What's the easiest way for someone to try out Modular without, A, committing to a massive infrastructure migration or something?