Sam Altman
I'm concerned. I mean, there's so many proposed regulations, but most of the ones I've seen at the California state level I'm concerned about. I also have a general fear of the states all doing this themselves. When people say "regulate AI," I don't think they mean one thing. Some people are like, ban the whole thing.
Some people are like, don't allow it to be open source; require it to be open source. The thing that I am personally most interested in is... I think there will come... Look, I may be wrong about this. I will acknowledge that this is a forward-looking statement, and those are always dangerous to make.
But I think there will come a time in the not-super-distant future, like, you know, we're not talking decades and decades from now, where the frontier AI systems are capable of causing significant global harm.
And for those kinds of systems, in the same way we have global oversight of nuclear weapons or synthetic bio or things that can really have a very negative impact way beyond the realm of one country, I would like to see some sort of international agency that is looking at the most powerful systems and ensuring reasonable safety testing.
Ensuring that these things are not going to escape and recursively self-improve, or whatever.
If the line were that we're only going to look at models trained on computers that cost more than $10 billion, or $100 billion, or whatever, I'd be fine with that. There'd be some line that'd be fine. And I don't think that puts any regulatory burden on startups.
Well, Chamath, go ahead. You had a follow-up. Can I say one more thing about that? Of course. I'd be super nervous about regulatory overreach here. I think we can get this wrong by doing way too much, or even a little too much. I also think we can get this wrong by not doing enough.
But I do think part of... And, I mean, we have seen regulatory overreach or capture get super bad in other areas. And, you know, maybe nothing will happen. But I think it is part of our duty and our mission to talk about what we believe is likely to happen and what it takes to get that right.
Totally. Right. Look, the reason I have pushed for an agency-based approach for the big-picture stuff, and not writing it into laws: in 12 months, it will all be written wrong. And I don't think, even if these people were true world experts, they could get it right looking out 12 or 24 months.
And I don't like these policies that say, you know, we're going to audit all of your source code and look at all of your weights one by one. I think there's a lot of crazy proposals out there.
Again, this is why I think it's... But, like, before an airplane gets certified, there's a set of safety tests. We put the airplane through them, and... Totally. It's different than reading all of your code.