Sam Altman
Podcast Appearances
I worry about that for A.I.
I think it will get caught up in like left versus right wars. I don't know exactly what that's going to look like, but I think that's just what happens with anything of consequence, unfortunately. What I meant more about theatrical risks is like AI is going to have, I believe, tremendously more good consequences than bad ones, but it is going to have bad ones.
And there'll be some bad ones that are bad, but not theatrical. You know, like, a lot more people have died of air pollution than nuclear reactors, for example. But most people worry more about living next to a nuclear reactor than a coal plant.
But something about the way we're wired is that although there's many different kinds of risks we have to confront, the ones that make a good climax scene of a movie carry much more weight with us than the ones that are very bad over a long period of time but on a slow burn.
I think that's a pretty straightforward question. Maybe I can think of more nuance later, but the pros seem obvious, which is that we get better products and more innovation faster and cheaper, and all the reasons competition is good.
We spend a lot of time talking about the need to prioritize safety. And I've said for like a long time that I think if you think of a quadrant of safety, short timelines to the start of AGI or long timelines, and then a slow takeoff or a fast takeoff, I think short timelines, slow takeoff is the safest quadrant and the one I'd most like us to be in.
But I do want to make sure we get that slow takeoff.