Sam Altman
Podcast Appearances
Maybe.
We take our time on releases of major new models, and I think it will be great when we do it, and I think we'll be thoughtful about how we do it. We may release it in a different way than we've released previous models. Also, I don't even know if we'll call it GPT-5. What I will say is, a lot of people have noticed how much better GPT-4 has gotten since we released it, particularly over the last few months.
I think that's a better hint of what the world looks like, where it's not the one, two, three, four, five, six, seven, but you use an AI system and the whole system just gets better and better fairly continuously. I think that's both a better technological direction and easier for society to adapt to. And I assume that's where we'll head.
Well, I mean, one thing that you could imagine is just that you keep training a model. That would seem like a reasonable thing to me.
GPT-4 is still only available to the paid users, but one of the things that we really want to do is figure out how to make more advanced technology available to free users too. I think that's a super important part of our mission.
And this idea that we build AI tools and make them super widely available, free or not that expensive, whatever it is, so that people can use them to go kind of invent the future, rather than the magic AGI in the sky inventing the future and showering it down upon us. That seems like a much better path. It seems like a more inspiring path. I also think it's where things are actually heading.
It makes me sad that we have not figured out how to make GPT-4 level technology available to free users. It's something we really want to do.
It's very expensive.
So on the first part of your question, speed and cost, those are hugely important to us. And I don't want to give a timeline on when we can bring them down a lot because research is hard, but I am confident we'll be able to. We want to cut the latency super dramatically. We want to cut the cost really, really dramatically. And I believe that will happen.
We're still so early in the development of the science and understanding of how this works. Plus, we have all the engineering tailwinds. So I don't know when we get to intelligence too cheap to meter, and so fast that it feels instantaneous to us, and everything else, but I do believe we can get there for a pretty high level of intelligence. It's important to us.