Sam Altman
I think it's like four: withdraws, changes, goes for this preliminary injunction, whatever. Our job is to build AGI in a way that benefits humanity and figure out how to safely and broadly distribute it. Our job is not to engage in like a Twitter fight with Elon. But when we have to respond to a legal filing, we will, and sometimes we'll provide context. I think we've only done this twice.
One, I think I was, like, a little bit wrong about that. And although I have had concerns, I have been impressed by how much not just us but the other AI labs have, even though they have this like wild sort of market or economic incentive, really been focused on developing safe models. I think there's many factors that went into that.
We did get a little lucky on the direction the technology went. But also if you deploy these models in a way that is harmful to people, you would like very quickly, I believe, lose your license to operate if it was an obvious one. Now, there are subtle things that can go wrong.
I think social media is an example of a place where maybe the harms weren't so obvious at the time, and then there was an emergent property at scale, and you could imagine something happening with AI that could be like that.
But the incentive problem has been better than I thought at the time, and I will cheerfully say I was a little bit naive about how the world works 10 years ago, and I feel better now.
Oh, the pressure, the societal pressure on big companies and sort of the power of researchers to push their companies to do the right thing, even in the face of this gigantic profit motive, have been pretty good.
But there is something that I don't feel naive about that I felt at the time too, which is it continues to be fairly crazy to me that this is happening in the hands of a small number of private companies. To me, this feels like the Manhattan Project or the Apollo program of our time. And those were not done by private companies. And I think it's like a mark of a well-functioning society.