Azeem Azhar
And the family that we looked at was only really the preeminent family for a few months.
Now, we know that enterprises don't change the API they're using the day a new one comes out.
There's always a bit of a lag.
But consumers do, right?
Because that's what you get access to on ChatGPT.
And you may remember that when GPT-4 was removed from ChatGPT, it had been an emotional support tool for many users.
And they were very upset with how methodical and mechanical GPT-5 now felt.
And I think one of the uncertainties is, to what extent do you actually learn and prepare for your next model based on the short life of the existing model, right?
There are a couple of elements to it, right?
One, I think, is a little bit more nebulous, which is that by having a really good model, even if it lasts for a short period of time, you maintain your forward momentum in the market in terms of customers liking you and your enterprise sales and so on.
And that feels less tangible than the second bit, which I think is perhaps a bit harder to unpick: what do you learn about running better and better models from actually having run a better model, even if it only lasts for four months?
And that learning might be sort of down in the weeds in sort of R&D and particular choices you make in training data and reinforcement learning.
It might also be in operations, right, in just operating a model of that scale.
And I think it's quite hard for us to know.
I suspect it's hard for OpenAI or any of the other foundation model labs to know the contribution of that second part to the model itself, right?
So in a sense, who in this kingdom is actually able to see with two eyes?
I'm not sure, you know, many can at this point.