Grant
Uh, you know, small, or at that level. Gemma 3n, for example. Yeah, Gemma 3n, like really, really small models, some of which you could allegedly even run on a phone. I mean, how you do that, I think, is still a little unclear. People are like, yeah, you can in theory, but we don't know how. Um, but there were two things that I wanted to follow up on. The first is the cloud point. There was a great take, I'm forgetting who it was, but I'll credit them in the comments,
who was basically saying, like, we spent all this time moving all of our computing to the cloud, but then it sort of, like, defeats the purpose of the Internet being decentralized.
Right.
And that now, all of a sudden, everything is, you know, going off to these, like, couple of providers, and it doesn't create that resiliency.
To your point, if, like, something gets hit, does the whole Eastern Seaboard go down, you know, depending on where the data centers are?
We don't know.
So there is a very good reason to try and have as much localized compute power as possible.
And to my earlier point, the models are also getting smaller.
They're getting not only smaller, but better when they're smaller, which is a very exciting thing to see.
And I think that the more that we specialize into...
niche models, the more that we'll see smaller models get used.
I guess the counterpoint, though, is, like, what if you do have a really niche use case, one that you think you don't need, you know, incredible gigawatt-level data centers to do?
You can technically train your own model on like a workstation or a series of workstations.
Great question.
Yeah.
Right.
Because you only need it for that very specific case.
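The workstation point above can be made concrete with a toy sketch. This is not any particular company's setup, and a real niche model would be trained on your own domain data (likely with a small transformer); the example below just shows that a narrow, specific task, here an invented bag-of-words text classifier, trains in milliseconds with nothing but NumPy on a laptop, let alone a workstation:

```python
# A minimal sketch, assuming a tiny invented dataset: a bag-of-words
# logistic-regression classifier trained locally with plain NumPy.
# Nothing here needs a data center, which is the point being made.
import numpy as np

# Toy labeled examples (hypothetical data, 1 = positive, 0 = negative).
docs = [
    ("battery drained fast", 0),
    ("battery lasted all day", 1),
    ("screen cracked easily", 0),
    ("screen looks great", 1),
    ("camera is blurry", 0),
    ("camera is sharp", 1),
]

# Build a vocabulary and bag-of-words feature vectors.
vocab = sorted({w for text, _ in docs for w in text.split()})
index = {w: i for i, w in enumerate(vocab)}

def featurize(text):
    x = np.zeros(len(vocab))
    for w in text.split():
        if w in index:
            x[index[w]] += 1.0
    return x

X = np.stack([featurize(t) for t, _ in docs])
y = np.array([label for _, label in docs], dtype=float)

# Logistic regression trained by plain gradient descent on log loss.
rng = np.random.default_rng(0)
w = rng.normal(scale=0.01, size=len(vocab))
b = 0.0
lr = 0.5
for _ in range(500):
    p = 1.0 / (1.0 + np.exp(-(X @ w + b)))  # predicted probabilities
    grad_w = X.T @ (p - y) / len(y)         # gradient w.r.t. weights
    grad_b = np.mean(p - y)                 # gradient w.r.t. bias
    w -= lr * grad_w
    b -= lr * grad_b

def predict(text):
    """Probability that the text is positive."""
    return 1.0 / (1.0 + np.exp(-(featurize(text) @ w + b)))
```

The same shape of argument scales up: a few thousand domain-specific examples and a small model architecture still fit comfortably on one machine, which is exactly the "very specific case" scenario.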
Quick tangent about that.
I wrote for a company once that was doing basically the equivalent of that, but for drones.
And so they have these automated drone swarms that basically can survey people.