
Lex Fridman Podcast

#367 – Sam Altman: OpenAI CEO on GPT-4, ChatGPT, and the Future of AI

25 Mar 2023

2h 28m duration
24292 words
3 speakers
Description

Sam Altman is the CEO of OpenAI, the company behind GPT-4, ChatGPT, DALL-E, Codex, and many other state-of-the-art AI technologies. Please support this podcast by checking out our sponsors:
- NetSuite: http://netsuite.com/lex to get free product tour
- SimpliSafe: https://simplisafe.com/lex
- ExpressVPN: https://expressvpn.com/lexpod to get 3 months free

EPISODE LINKS:
Sam's Twitter: https://twitter.com/sama
OpenAI's Twitter: https://twitter.com/OpenAI
OpenAI's Website: https://openai.com
GPT-4 Website: https://openai.com/research/gpt-4

PODCAST INFO:
Podcast website: https://lexfridman.com/podcast
Apple Podcasts: https://apple.co/2lwqZIr
Spotify: https://spoti.fi/2nEwCF8
RSS: https://lexfridman.com/feed/podcast/
YouTube Full Episodes: https://youtube.com/lexfridman
YouTube Clips: https://youtube.com/lexclips

SUPPORT & CONNECT:
- Check out the sponsors above, it's the best way to support this podcast
- Support on Patreon: https://www.patreon.com/lexfridman
- Twitter: https://twitter.com/lexfridman
- Instagram: https://www.instagram.com/lexfridman
- LinkedIn: https://www.linkedin.com/in/lexfridman
- Facebook: https://www.facebook.com/lexfridman
- Medium: https://medium.com/@lexfridman

OUTLINE:
Here's the timestamps for the episode. On some podcast players you should be able to click the timestamp to jump to that time.
(00:00) - Introduction
(08:41) - GPT-4
(20:06) - Political bias
(27:07) - AI safety
(47:47) - Neural network size
(51:40) - AGI
(1:13:09) - Fear
(1:15:18) - Competition
(1:17:38) - From non-profit to capped-profit
(1:20:58) - Power
(1:26:11) - Elon Musk
(1:34:37) - Political pressure
(1:52:51) - Truth and misinformation
(2:05:13) - Microsoft
(2:09:13) - SVB bank collapse
(2:14:04) - Anthropomorphism
(2:18:07) - Future applications
(2:21:59) - Advice for young people
(2:24:37) - Meaning of life

Transcription

Chapter 1: What is the significance of AI in today's society?

0.031 - 21.55 Lex Fridman

The following is a conversation with Sam Altman, CEO of OpenAI, the company behind GPT-4, ChatGPT, DALL-E, Codex, and many other AI technologies, which both individually and together constitute some of the greatest breakthroughs in the history of artificial intelligence, computing, and humanity in general.


22.053 - 41.861 Lex Fridman

Please allow me to say a few words about the possibilities and the dangers of AI in this current moment in the history of human civilization. I believe it is a critical moment. We stand on the precipice of fundamental societal transformation, where soon, nobody knows when, but many, including me, believe it's within our lifetime.


42.662 - 61.143 Lex Fridman

The collective intelligence of the human species begins to pale in comparison, by many orders of magnitude, to the general superintelligence in the AI systems we build and deploy at scale. This is both exciting and terrifying.


61.964 - 81.878 Lex Fridman

It is exciting because of the innumerable applications we know and don't yet know that will empower humans to create, to flourish, to escape the widespread poverty and suffering that exists in the world today, and to succeed in that old, all-too-human pursuit of happiness.


82.887 - 103.765 Lex Fridman

It is terrifying because of the power that super-intelligent AGI wields to destroy human civilization, intentionally or unintentionally. The power to suffocate the human spirit in the totalitarian way of George Orwell's 1984 or the pleasure-fueled mass hysteria

103.745 - 128.649 Lex Fridman

of Brave New World where, as Huxley saw it, people come to love their oppression, to adore the technologies that undo their capacities to think. That is why these conversations with the leaders, engineers, and philosophers, both optimists and cynics, are important now. These are not merely technical conversations about AI.

129.23 - 155.951 Lex Fridman

These are conversations about power, about companies, institutions, and political systems that deploy, check, and balance this power, about distributed economic systems that incentivize the safety and human alignment of this power. about the psychology of the engineers and leaders that deploy AGI, and about the history of human nature, our capacity for good and evil at scale.

157.273 - 185.809 Lex Fridman

I'm deeply honored to have gotten to know and to have spoken with, on and off the mic, many folks who now work at OpenAI, including Sam Altman, Greg Brockman, Ilya Sutskever, Wojciech Zaremba, Andrej Karpathy, Jakub Pachocki, and many others. It means the world that Sam has been totally open with me, willing to have multiple conversations, including challenging ones, on and off the mic.

186.65 - 213.742 Lex Fridman

I will continue to have these conversations to both celebrate the incredible accomplishments of the AI community and to steel man the critical perspective on major decisions various companies and leaders make. always with the goal of trying to help in my small way. If I fail, I will work hard to improve. I love you all. And now a quick few second mention of each sponsor.

Chapter 2: How does AI pose both excitement and terror?

407.384 - 432.186 Lex Fridman

This show is also brought to you by ExpressVPN. Speaking of security, this is how you protect yourself in the digital space. This should be the first layer in the digital space. I've used them for so, so, so many years. The big sexy red button, I would just press it and I would escape from the place I am to any place I want to be.


433.617 - 462.207 Lex Fridman

That is somewhat metaphorical, but as far as the internet is concerned, it is quite literal. This is useful for all kinds of reasons. One, it just increases the level of privacy that you have while browsing the internet. Of course, it also allows you to interact with streaming services that constrain what shows can be watched based on your geographic location.


462.187 - 488.257 Lex Fridman

To me, just like I said, I love it when a product, when a piece of software does one thing and does it exceptionally well. It's done that for me for many, many years. It's fast. It works on any device, any operating system, including Linux, Android, Windows, anything and everything. You should be definitely using a VPN. ExpressVPN is the one I've been using. It's the one I recommend.


488.818 - 526.77 Lex Fridman

Go to expressvpn.com slash lexpod for an extra three months free. This is the Lex Fridman Podcast. To support it, please check out our sponsors in the description. And now, dear friends, here's Sam Altman. High level, what is GPT-4? How does it work, and what is most amazing about it?


527.571 - 548.081 Sam Altman

It's a system that we'll look back at and say was a very early AI. And it's slow, it's buggy, it doesn't do a lot of things very well, but neither did the very earliest computers. And they still pointed a path to something that was going to be really important in our lives, even though it took a few decades to evolve.

548.422 - 548.522

Yeah.

548.502 - 564.67 Lex Fridman

Do you think this is a pivotal moment? Like out of all the versions of GPT, 50 years from now, when they look back at an early system that was really kind of a leap, you know, in a Wikipedia page about the history of artificial intelligence, which of the GPTs would they put?

564.87 - 587.662 Sam Altman

That is a good question. I sort of think of progress as this continual exponential thing. It's not like we could say here was the moment where AI went from not happening to happening. And I'd have a very hard time pinpointing a single thing. I think it's this very continual curve. Will the history books write about GPT-1 or 2 or 3 or 4 or 7? That's for them to decide. I don't really know.

587.822 - 600.847 Sam Altman

I think... If I had to pick some moment from what we've seen so far, I'd sort of pick ChatGPT. You know, it wasn't the underlying model that mattered. It was the usability of it, both the RLHF and the interface to it.

Chapter 3: What are the key conversations surrounding AI power dynamics?

683.57 - 693.685 Lex Fridman

And then somehow adding a little bit of human guidance on top of it through this process makes it seem so much more awesome.


695.027 - 704.582 Sam Altman

Maybe just because it's much easier to use. It's much easier to get what you want. You get it right more often the first time, and ease of use matters a lot, even if the base capability was there before.


704.562 - 723.947 Lex Fridman

And like a feeling like it understood the question you're asking, or like it feels like you're kind of on the same page, it's trying to help you. It's the feeling of alignment. Yes. I mean, that could be a more technical term for it. And you're saying that not much data is required for that, not much human supervision is required for that.


723.967 - 736.749 Sam Altman

To be fair, we understand the science of this part at a much earlier stage than we do the science of creating these large pre-trained models in the first place, but yes, less data. Much less data. That's so interesting.


736.809 - 755.358 Lex Fridman

The science of human guidance. That's a very interesting science, and it's going to be a very important science to understand: how to make it usable, how to make it wise, how to make it ethical, how to make it aligned in terms of all the kind of stuff we think about.

758.062 - 781.493 Lex Fridman

And it matters which are the humans and what is the process of incorporating that human feedback and what are you asking the humans? Is it two things? Are you asking them to rank things? What aspects are you letting or asking the humans to focus in on? It's really fascinating. What is the data set it's trained on? Can you kind of loosely speak to the enormity of this data set?
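The ranking process Lex asks about here is typically turned into a reward model trained on pairwise human preferences. A minimal sketch of that idea in Python, with placeholder features and a stand-in encoder rather than anything from OpenAI's actual RLHF pipeline:

```python
# Minimal sketch of learning a reward model from pairwise human rankings
# (placeholder tensors and a toy encoder; not OpenAI's implementation).
import torch
import torch.nn as nn
import torch.nn.functional as F

class RewardModel(nn.Module):
    def __init__(self, hidden_size: int = 128):
        super().__init__()
        # Stand-in encoder: in practice this would be a pretrained transformer.
        self.encoder = nn.Sequential(nn.Linear(hidden_size, hidden_size), nn.Tanh())
        self.score_head = nn.Linear(hidden_size, 1)

    def forward(self, features: torch.Tensor) -> torch.Tensor:
        # features: (batch, hidden_size) representation of a prompt+response pair.
        return self.score_head(self.encoder(features)).squeeze(-1)

def preference_loss(model: RewardModel, chosen: torch.Tensor, rejected: torch.Tensor) -> torch.Tensor:
    # Bradley-Terry style objective: push the score of the human-preferred
    # response above the score of the rejected one.
    return -F.logsigmoid(model(chosen) - model(rejected)).mean()

model = RewardModel()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
chosen = torch.randn(8, 128)    # hypothetical features for preferred responses
rejected = torch.randn(8, 128)  # hypothetical features for rejected responses
loss = preference_loss(model, chosen, rejected)
loss.backward()
optimizer.step()
```

The learned reward model then scores candidate responses during a later fine-tuning step; the "much less data" Altman mentions refers to this preference stage, not to pre-training.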

781.513 - 783.875 Lex Fridman

The pre-training data set? The pre-training data set, I apologize.

784.656 - 800.213 Sam Altman

We spend a huge amount of effort pulling that together from many different sources. There are open source databases of information. We get stuff via partnerships. There's things on the internet. A lot of our work is building a great data set.

801.425 - 804.492 Lex Fridman

How much of it is the memes subreddit? Not very much.
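Altman's description of pulling the pre-training set together from open-source databases, partnerships, and things on the internet is, at a very high level, a merge-and-deduplicate problem. A toy sketch with invented sources and a naive exact-match dedup, not a description of OpenAI's actual data pipeline:

```python
# Toy sketch of combining text sources and dropping exact duplicates
# (invented sources and naive hashing; not OpenAI's data pipeline).
import hashlib
from typing import Iterable, Iterator

def normalize(text: str) -> str:
    # Collapse whitespace and lowercase so trivially different copies match.
    return " ".join(text.split()).lower()

def dedup(documents: Iterable[str]) -> Iterator[str]:
    seen = set()
    for doc in documents:
        digest = hashlib.sha256(normalize(doc).encode("utf-8")).hexdigest()
        if digest not in seen:
            seen.add(digest)
            yield doc

# Hypothetical stand-ins for open datasets, partner data, and web crawls.
open_source_docs = ["The quick brown fox.", "An encyclopedia article."]
partner_docs = ["Licensed   text from a partner.", "The quick brown fox."]
web_docs = ["A forum post.", "An encyclopedia article."]

corpus = list(dedup(open_source_docs + partner_docs + web_docs))
print(f"{len(corpus)} unique documents kept")
```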

Chapter 4: What insights can we gain from interactions with OpenAI leaders?

840.962 - 850.472 Lex Fridman

There's the selection of the data. There's the human supervised aspect of it with RL with human feedback.


850.492 - 872.792 Sam Altman

Yeah, I think one thing that is not that well understood about the creation of this final product, like what it takes to make GPT-4, the version of it we actually ship out that you get to use inside of ChatGPT, is the number of pieces that have to all come together, and then we have to figure out either new ideas or just execute existing ideas really well at every stage of this pipeline.


872.772 - 874.494 Sam Altman

There's quite a lot that goes into it.


875.115 - 892.839 Lex Fridman

So there's a lot of problem solving. Like you've already said for GPT-4 in the blog post and in general, there's already kind of a maturity that's happening on some of these steps. Like being able to predict before doing the full training of how the model will behave.


892.859 - 903.653 Sam Altman

Isn't that so remarkable, by the way, that there's like a law of science that lets you predict for these inputs, here's what's going to come out the other end. Like here's the level of intelligence you can expect.

904.034 - 915.853 Lex Fridman

Is it close to a science or is it still, because you said the word law and science, which are very ambitious terms. Close to, I say. Close to, right. Be accurate, yes.

916.034 - 919.7 Sam Altman

I'll say it's way more scientific than I ever would have dared to imagine.

920.02 - 927.973 Lex Fridman

So you can really know the peculiar characteristics of the fully trained system from just a little bit of training.

928.19 - 948.214 Sam Altman

Like any new branch of science, we're going to discover new things that don't fit the data and have to come up with better explanations. And that is the ongoing process of discovering science. But with what we know now, even what we had in that GPT-4 blog post, I think we should all just be in awe of how amazing it is that we can even predict to this current level.
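The "predict before doing the full training" capability discussed here is commonly framed as fitting a scaling law to small runs and extrapolating to the full run. A rough illustration with made-up numbers, not the actual fit behind the GPT-4 prediction:

```python
# Rough illustration of loss-vs-compute scaling-law extrapolation
# (made-up data points; not the numbers behind the GPT-4 prediction).
import numpy as np
from scipy.optimize import curve_fit

def scaling_law(compute, a, b, c):
    # Irreducible loss c plus a power-law term that shrinks with compute.
    return c + a * compute ** (-b)

# Final losses observed on several small training runs (hypothetical).
compute = np.array([1e18, 1e19, 1e20, 1e21])
loss = np.array([2.76, 2.62, 2.50, 2.39])

params, _ = curve_fit(scaling_law, compute, loss, p0=[10.0, 0.05, 1.5])
predicted = scaling_law(1e24, *params)  # extrapolate to the full-scale run
print(f"predicted full-run loss: {predicted:.2f}")
```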

Chapter 5: How did OpenAI's transition from nonprofit to capped-profit occur?

4646.727 - 4647.848 Sam Altman

We don't get mocked as much now.


4648.672 - 4666.042 Lex Fridman

Don't get mocked as much now. So, speaking about the structure of the org: OpenAI stopped being a nonprofit, or split up. Can you describe that whole process?


4666.062 - 4687.403 Sam Altman

How did it stand? We started as a nonprofit. We learned early on that we were going to need far more capital than we were able to raise as a nonprofit. Our nonprofit is still fully in charge. There is a subsidiary capped profit so that our investors and employees can earn a certain fixed return. And then beyond that, everything else flows to the nonprofit.


4687.443 - 4710.975 Sam Altman

And the nonprofit is like in voting control, lets us make a bunch of nonstandard decisions, can cancel equity, can do a whole bunch of other things, can let us merge with another org, protects us from making decisions that are not in any like shareholder's interest. So I think as a structure, it has been important to a lot of the decisions we've made.
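To make the capped-return structure Altman describes concrete, here is a toy calculation using the 100x cap figure that comes up later in the conversation; the dollar amounts are assumed for illustration and are not OpenAI's actual terms:

```python
# Toy illustration of a capped-profit return split
# (assumed figures; not OpenAI's actual investment terms).
def split_proceeds(investment: float, cap_multiple: float, total_return: float):
    """Investors keep returns up to cap_multiple * investment;
    anything above the cap flows to the nonprofit."""
    investor_cap = investment * cap_multiple
    to_investor = min(total_return, investor_cap)
    to_nonprofit = max(total_return - investor_cap, 0.0)
    return to_investor, to_nonprofit

# Example: $10M invested at a 100x cap, with $5B of eventual proceeds.
investor, nonprofit = split_proceeds(10e6, 100, 5e9)
print(f"investor receives ${investor:,.0f}, nonprofit receives ${nonprofit:,.0f}")
```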


4711.376 - 4723.294 Lex Fridman

What went into that decision process for taking a leap from nonprofit to capped for-profit? What are the pros and cons you were deciding at the time? I mean, this was 2019.

4723.354 - 4742.512 Sam Altman

It was really like: to do what we needed to go do, we had tried and failed enough to raise the money as a nonprofit. We didn't see a path forward there. So we needed some of the benefits of capitalism, but not too much. I remember at the time someone said, you know, as a nonprofit, not enough will happen. As a for-profit, too much will happen.

Chapter 6: What are the implications of AGI on competition and safety?

4743.173 - 4744.835 Sam Altman

So we need this sort of strange intermediate.


4746.797 - 4774.551 Lex Fridman

You kind of had this offhand comment that you worry about the uncapped companies that play with AGI. Can you elaborate on the worry here? Because AGI, out of all the technologies we have in our hands, has the potential to make... the cap is 100x for OpenAI. It started... it's much, much lower for new investors now. You know, AGI can make a lot more than 100x. For sure.


4775.312 - 4783.54 Lex Fridman

And so how do you, like how do you compete, like stepping outside of OpenAI, how do you look at a world where Google is playing


4783.52 - 4805.157 Sam Altman

Where Apple and these and Meta are playing? We can't control what other people are going to do. We can try to, like, build something and talk about it and influence others and provide value and, you know, good systems for the world, but they're going to do what they're going to do. Now, I think right now there's, like,


4808.242 - 4835.025 Sam Altman

extremely fast and not super deliberate motion inside of some of these companies. But already, I think people are, as they see the rate of progress, already people are grappling with what's at stake here, and I think the better angels are going to win out. Can you elaborate on that? The better angels of individuals? The individuals and companies. But, you know, the incentives of capitalism to create and capture unlimited value

4836.558 - 4856.042 Sam Altman

I'm a little afraid of, but again, no, I think no one wants to destroy the world. No one wakes up saying, like, today I want to destroy the world. So we've got the Moloch problem. On the other hand, we've got people who are very aware of that. And I think a lot of healthy conversation about how we can collaborate to minimize some of these very scary downsides.

4858.805 - 4872.789 Lex Fridman

Well, nobody wants to destroy the world. Let me ask you a tough question. So, you are very likely to be one of, not the person that creates AGI. One of. One of.

4873.33 - 4884.746 Sam Altman

And even then, like, we're on a team of many. There'll be many teams. But... Several teams. Small number of people, nevertheless, relative. I do think it's strange that it's maybe a few tens of thousands of people in the world.

Chapter 7: What role does feedback play in developing AI technology?

4884.766 - 4886.709 Sam Altman

A few thousands of people in the world.


4887.25 - 4892.578 Lex Fridman

But there will be a room with a few folks who are like, holy shit.


4892.598 - 4894.181 Sam Altman

That happens more often than you would think now.


4894.461 - 4897.766 Lex Fridman

I understand. I understand this. I understand this.

4897.786 - 4899.309 Sam Altman

But yes, there will be more such rooms.

4899.429 - 4914.372 Lex Fridman

Which is a beautiful place to be in the world. Terrifying, but mostly beautiful. So that might make you and a handful of folks the most powerful humans on earth. Do you worry that power might corrupt you?

4915.077 - 4933.268 Sam Altman

For sure. Look, I don't... I think you want decisions about this technology, and certainly decisions about who is running this technology, to become increasingly democratic over time.

4934.069 - 4953.927 Sam Altman

We haven't figured out quite how to do this, but part of the reason for deploying like this is to get the world to have time to adapt and to reflect and to think about this, to pass regulation for institutions to come up with new norms for the people working on it together. That is a huge part of why we deploy.

Chapter 8: How does Sam Altman view the future of AI and human collaboration?

5039.783 - 5053.463 Lex Fridman

What does knowing the people at OpenAI have to do with it? Because I know they're good people. I know a lot of people. I know they're good human beings. From a perspective of people that don't know the human beings, there's a concern of the super powerful technology in the hands of a few that's closed.


5053.943 - 5075.46 Sam Altman

It's closed in some sense, but we give more access to it. Yeah. If this had just been Google's game... I feel it's very unlikely that anyone would have put this API out. There's PR risk with it. I get personal threats because of it all the time. I think most companies wouldn't have done this. So maybe we didn't go as open as people wanted, but like we've distributed it pretty broadly.


5076.141 - 5096.351 Lex Fridman

You personally, and OpenAI's culture, is not so, like, nervous about PR risk and all that kind of stuff. You're more nervous about the risk of the actual technology, and you reveal that. So the nervousness that people have, because it's such early days of the technology, is that you will close off over time because it's more and more powerful.


5096.731 - 5103.981 Lex Fridman

My nervousness is you get attacked so much by fear-mongering clickbait journalism that you're like, why the hell do I need to deal with this?


5104.081 - 5106.544 Sam Altman

I think the clickbait journalism bothers you more than it bothers me.

5107.334 - 5123.918 Lex Fridman

No, I'm a third person bothered. I appreciate that. I feel all right about it. Of all the things I lose sleep over, it's not high on the list. Because it's important. There's a handful of companies, a handful of folks that are really pushing this forward. They're amazing folks. I don't want them to become cynical about the rest of the world.

5124.479 - 5142.763 Sam Altman

I think people at OpenAI feel the weight of responsibility of what we're doing. And yeah, it would be nice if journalists were nicer to us and Twitter trolls gave us more benefit of the doubt. But I think we have a lot of resolve in what we're doing and why, and the importance of it.

5144.765 - 5154.875 Sam Altman

But I really would love, and I ask this of a lot of people, not just if cameras are rolling, any feedback you've got for how we can be doing better. We're in uncharted waters here. Talking to smart people is how we figure out what to do better.

5154.895 - 5161.001 Lex Fridman

How do you take feedback? Do you take feedback from Twitter also? Because there's the sea, the waterfall.
