Sam Altman
Podcast Appearances
Again, this is why I think it's... But, like, when... Before an airplane gets certified, there's a set of safety tests, and we put the airplane through them. Totally. It's different than reading all of your code.
And so what I was going to say is that that is the kind of thing where I think safety testing makes sense.
Or do you see that... At the current strength of models, definitely some things are going to go wrong, and I don't want to make light of those or not take them seriously. But I don't have any catastrophic risk worries with a GPT-4 level model, and I think there are many safe ways to choose to deploy this.
Maybe we'd find more common ground if we said that, you know, the specific example of models that are technically capable, even if they're not going to be used this way, of recursive self-improvement, or of autonomously designing and deploying a bioweapon, or something like that. Or a new model. Yeah. That was the recursive self-improvement point.
We should have safety testing on the outputs, at an international level, for models that have a reasonable chance of posing a threat there. I don't think GPT-4 poses it in any sort of... well, I won't say any sort, because we don't... Yeah, I don't think GPT-4 poses a material threat on those kinds of things, and I think there are many safe ways to release a model like this. But, you know, when significant loss of human life is a serious possibility, like airplanes or
any number of other examples, where I think we're happy to have some sort of testing framework. Like, I don't think about an airplane when I get on it. I just assume it's going to be safe.
Our results on that come out very soon. It was a five-year study that just wrapped up, or rather, it started five years ago. Well, there was a beta study first, and then the long one ran.
So we started thinking about this in 2016, around the same time we started taking AI really seriously. And the theory was that the magnitude of the change that may come to society, jobs, and the economy, and in some deeper sense than that, to what the social contract looks like, meant that we should have many studies exploring many ideas about new ways to arrange that.
I also think I'm not a super fan of how the government has handled most policies designed to help poor people. And I kind of believe that if you could just give people money, they would make good decisions and the market would do its thing. And, you know, I'm very much in favor of lifting up the floor and reducing or eliminating poverty.
But I'm interested in better ways to do that than what we've tried with the existing social safety net and the way things have been handled. And I think giving people money is not going to solve all problems. It's certainly not going to make people happy. But it might solve some problems, and it might give people a better horizon from which to help themselves.