Sam Altman
Podcast Appearances
We lose money on it. It's not ad supported or anything. We just want to put AI in people's hands. We continue to want to deploy this technology so that people co-evolve with it, understand it, that the world is going through this process it's going through right now of contending with AI and eventually AGI and thinking how it's going to go.
And everything we're doing, I believe Elon would be happy about if he weren't in control of the company. He left when he thought we were on a trajectory to certainly fail, and when we wouldn't do something where he would have total control over OpenAI.
But I think it's a little bit of a sideshow, and the right thing for us to do is just keep doing incredible research, keep shipping products people love, and most importantly, keep pursuing this mission of AGI to benefit people and getting that out into the world.
When we started OpenAI, we thought... It's hard to go back and remember how different things were in 2015. That was before language models and chatbots. It was way before ChatGPT. We were doing research and publishing papers and working on AIs that could play video games and control robotic hands and things like that. And we were supposed to get a billion dollars, but ended up not.
We thought with a billion dollars, we could make substantial progress towards what we were trying to do. As we learned more and got into the scaling language model world, we realized that it was not going to cost $1 billion or even $10 billion, but like $100 billion plus. And we couldn't do that as a nonprofit. So that was the fundamental reason for the restructuring.
And every other effort pursuing AI has realized this and is set up in some way where they can sort of access capital markets.
Oh, well, I think there are people who will really be a jerk on Twitter but who will still not, like, abuse the system of a country where they're now in an extremely influential political role. That seems completely different to me.