Sam Altman
Great.
I think too much regulation clearly has huge negative consequences; in many places in society right now, we have experienced too much. I mean, Elon has also been a big proponent of calling for AI regulation, as have the heads of most other large efforts. When you step on an airplane, you think there's a very high likelihood it's going to be a safe experience.
When you eat food in the US, you don't think too much about food safety. Some regulation is clearly a good thing. Now, I can imagine versions of AI regulation that are really problematic and would disadvantage smaller efforts. And I think that would be a real mistake. But for some safety guardrails on the most powerful systems, that should only affect the people at the frontier.
That should only affect OpenAI and a small handful of others. I don't think we're at the level yet where these systems have huge safety implications. But I don't think we're like wildly far away either. So that's the sort of art here.
I don't... Well, if what they're saying is, we're behind OpenAI, so it doesn't matter, and what we're calling for is only regulation at the frontier, only stuff that is new and untested, but otherwise put out whatever open source model you want, I don't think it's reasonable for them to make that argument. I don't know, I'm curious what you think.
If we do, let's say we succeed and make a superintelligence, we make this computer program that is smarter, maybe more capable, than all of humanity put together. Do you think there should be any regulation on that at all, or would you just say none?
For sure. How and when matters a lot. But I agree with that. And I could easily see it going really wrong.