
Sam Altman

Speaker
3367 total appearances

[Chart: Appearances Over Time]

Podcast Appearances

Lex Fridman Podcast
#419 – Sam Altman: OpenAI, GPT-5, Sora, Board Saga, Elon Musk, Ilya, Power & AGI

I worry about that for A.I.

Lex Fridman Podcast
#419 – Sam Altman: OpenAI, GPT-5, Sora, Board Saga, Elon Musk, Ilya, Power & AGI

I think it will get caught up in like left versus right wars. I don't know exactly what that's going to look like, but I think that's just what happens with anything of consequence, unfortunately. What I meant more about theatrical risks is like AI is going to have, I believe, tremendously more good consequences than bad ones, but it is going to have bad ones.

Lex Fridman Podcast
#419 – Sam Altman: OpenAI, GPT-5, Sora, Board Saga, Elon Musk, Ilya, Power & AGI

And there'll be some bad ones that are bad, but not theatrical. You know, like, a lot more people have died of air pollution than nuclear reactors, for example. But most people worry more about living next to a nuclear reactor than a coal plant.

Lex Fridman Podcast
#419 – Sam Altman: OpenAI, GPT-5, Sora, Board Saga, Elon Musk, Ilya, Power & AGI

But something about the way we're wired is that although there's many different kinds of risks we have to confront, the ones that make a good climax scene of a movie carry much more weight with us than the ones that are very bad over a long period of time but on a slow burn.

Lex Fridman Podcast
#419 – Sam Altman: OpenAI, GPT-5, Sora, Board Saga, Elon Musk, Ilya, Power & AGI

I think that's a pretty straightforward question. Maybe I can think of more nuance later, but the pros seem obvious, which is that we get better products and more innovation faster and cheaper, and all the reasons competition is good.

Lex Fridman Podcast
#419 – Sam Altman: OpenAI, GPT-5, Sora, Board Saga, Elon Musk, Ilya, Power & AGI

We spend a lot of time talking about the need to prioritize safety. And I've said for like a long time that I think if you think of a quadrant of safety: short timelines to the start of AGI or long timelines, and then a slow takeoff or a fast takeoff. I think short timelines, slow takeoff is the safest quadrant and the one I'd most like us to be in.

Lex Fridman Podcast
#419 – Sam Altman: OpenAI, GPT-5, Sora, Board Saga, Elon Musk, Ilya, Power & AGI

But I do want to make sure we get that slow takeoff.
