Sebastian Siemiatkowski
[Chart: podcast appearances over time]
I'm very interested in this.
Like, I got this question.
I was at a conference in Yellowstone, and I was on the stage, crazy enough, with Sam Altman and Eric Schmidt.
And from the audience comes this question: how is it possible that you can take the whole of ChatGPT, GPT-5 as an example, one of these models, once it's been trained, once the training is over and the whole thing is, like, done, and fit it on a USB stick?
How is that possible?
That it's not bigger in size.
It's just, like, a few hundred gigabytes, or whatever the size of these models is.
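The size question can be checked with back-of-the-envelope arithmetic: model weights take roughly parameters times bytes per parameter. A quick sketch, using an illustrative GPT-3-scale parameter count rather than any disclosed figure for GPT-5:

```python
# Rough weight-size arithmetic (illustrative parameter count, not an
# official figure for any OpenAI model): size = parameters x bytes each.
params = 175e9                      # GPT-3-scale model, for illustration
bytes_fp16 = params * 2             # 16-bit weights: 2 bytes per parameter
bytes_int4 = params * 0.5           # 4-bit quantized weights

print(f"fp16: {bytes_fp16 / 1e9:.0f} GB")   # 350 GB
print(f"int4: {bytes_int4 / 1e9:.0f} GB")   # ~88 GB, fits on a large USB stick
```

Hundreds of gigabytes of weights really do fit on commodity storage, even though the training corpus behind them is orders of magnitude larger.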
And then they gave like different answers.
And I had an answer in my head, but I felt embarrassed in that setting to say it. I think what people underestimate with AI is that it's a compression technology.
So what that means is, if you historically put data in a database, you'd say, a database record: okay, Klarna has a customer called Sephora.
And then we write it again: Klarna has a customer in Sephora.
You create this tremendous amount of duplication.
If you look at any large enterprise company, they will have the same information over and over and over again.
But if you look at Wikipedia,
How many articles are there about Klarna?
One.
Why aren't there 15?
What do they do so magically?
How can it be that Klarna historically had a customer relationship with Sephora, and we had information about that customer relationship in Slack, in Salesforce, in Google Docs, in Google Slides?
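The duplication point above can be made concrete with a standard compressor: the same customer fact stored thousands of times, as it might be across enterprise systems, compresses down to a tiny fraction of its raw size. A minimal sketch using Python's standard zlib (the record text is just an illustration, not real Klarna data):

```python
import zlib

# One "database row", repeated the way the same fact ends up duplicated
# across Slack, Salesforce, Google Docs, and so on.
record = b"Klarna has a customer called Sephora.\n"
duplicated = record * 10_000        # ~380 KB of repeated records

compressed = zlib.compress(duplicated, level=9)

# Redundant data compresses dramatically; the compressed form is close
# to the cost of storing the fact once plus "repeat 10,000 times".
print(len(duplicated), len(compressed))
```

This is the intuition behind "AI is a compression technology": a trained model, like the single Wikipedia article, stores each fact roughly once rather than every time it appears in the training data.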