
Lex Fridman Podcast

#438 – Elon Musk: Neuralink and the Future of Humanity

Fri, 02 Aug 2024

Description

Elon Musk is CEO of Neuralink, SpaceX, Tesla, xAI, and CTO of X. DJ Seo is COO & President of Neuralink. Matthew MacDougall is Head Neurosurgeon at Neuralink. Bliss Chapman is Brain Interface Software Lead at Neuralink. Noland Arbaugh is the first human to have a Neuralink device implanted in his brain.

Transcript: https://lexfridman.com/elon-musk-and-neuralink-team-transcript

Please support this podcast by checking out our sponsors: https://lexfridman.com/sponsors/ep438-sc

SPONSOR DETAILS:
- Cloaked: https://cloaked.com/lex and use code LexPod to get 25% off
- MasterClass: https://masterclass.com/lexpod to get 15% off
- Notion: https://notion.com/lex
- LMNT: https://drinkLMNT.com/lex to get free sample pack
- Motific: https://motific.ai
- BetterHelp: https://betterhelp.com/lex to get 10% off

CONTACT LEX:
- Feedback - give feedback to Lex: https://lexfridman.com/survey
- AMA - submit questions, videos or call-in: https://lexfridman.com/ama
- Hiring - join our team: https://lexfridman.com/hiring
- Other - other ways to get in touch: https://lexfridman.com/contact

EPISODE LINKS:
- Neuralink's X: https://x.com/neuralink
- Neuralink's Website: https://neuralink.com/
- Elon's X: https://x.com/elonmusk
- DJ's X: https://x.com/djseo_
- Matthew's X: https://x.com/matthewmacdoug4
- Bliss's X: https://x.com/chapman_bliss
- Noland's X: https://x.com/ModdedQuad
- xAI: https://x.com/xai
- Tesla: https://x.com/tesla
- Tesla Optimus: https://x.com/tesla_optimus
- Tesla AI: https://x.com/Tesla_AI

PODCAST INFO:
- Podcast website: https://lexfridman.com/podcast
- Apple Podcasts: https://apple.co/2lwqZIr
- Spotify: https://spoti.fi/2nEwCF8
- RSS: https://lexfridman.com/feed/podcast/
- YouTube Full Episodes: https://youtube.com/lexfridman
- YouTube Clips: https://youtube.com/lexclips

SUPPORT & CONNECT:
- Check out the sponsors above, it's the best way to support this podcast
- Support on Patreon: https://www.patreon.com/lexfridman
- Twitter: https://twitter.com/lexfridman
- Instagram: https://www.instagram.com/lexfridman
- LinkedIn: https://www.linkedin.com/in/lexfridman
- Facebook: https://www.facebook.com/lexfridman
- Medium: https://medium.com/@lexfridman

OUTLINE:
Here's the timestamps for the episode. On some podcast players you should be able to click the timestamp to jump to that time.
(00:00) - Introduction
(09:26) - Elon Musk
(12:42) - Telepathy
(19:22) - Power of human mind
(23:49) - Future of Neuralink
(29:04) - Ayahuasca
(38:33) - Merging with AI
(43:21) - xAI
(45:34) - Optimus
(52:24) - Elon's approach to problem-solving
(1:09:59) - History and geopolitics
(1:14:30) - Lessons of history
(1:18:49) - Collapse of empires
(1:26:32) - Time
(1:29:14) - Aliens and curiosity
(1:36:48) - DJ Seo
(1:44:57) - Neural dust
(1:51:40) - History of brain–computer interface
(1:59:44) - Biophysics of neural interfaces
(2:10:12) - How Neuralink works
(2:16:03) - Lex with Neuralink implant
(2:36:01) - Digital telepathy
(2:47:03) - Retracted threads
(2:52:38) - Vertical integration
(2:59:32) - Safety
(3:09:27) - Upgrades
(3:18:30) - Future capabilities
(3:47:46) - Matthew MacDougall
(3:53:35) - Neuroscience
(4:00:44) - Neurosurgery
(4:11:48) - Neuralink surgery
(4:30:57) - Brain surgery details
(4:46:40) - Implanting Neuralink on self
(5:02:34) - Life and death
(5:11:54) - Consciousness
(5:14:48) - Bliss Chapman
(5:28:04) - Neural signal
(5:34:56) - Latency
(5:39:36) - Neuralink app
(5:44:17) - Intention vs action
(5:55:31) - Calibration
(6:05:03) - Webgrid
(6:28:05) - Neural decoder
(6:48:40) - Future improvements
(6:57:36) - Noland Arbaugh
(6:57:45) - Becoming paralyzed
(7:11:20) - First Neuralink human participant
(7:15:21) - Day of surgery
(7:33:08) - Moving mouse with brain
(7:58:27) - Webgrid
(8:06:28) - Retracted threads
(8:14:53) - App improvements
(8:21:38) - Gaming
(8:32:36) - Future Neuralink capabilities
(8:35:31) - Controlling Optimus robot
(8:39:53) - God

Transcription

0.049 - 21.111 Lex Fridman

The following is a conversation with Elon Musk, DJ Seo, Matthew MacDougall, Bliss Chapman, and Noland Arbaugh about Neuralink and the future of humanity. Elon, DJ, Matthew, and Bliss are, of course, part of the amazing Neuralink team. And Noland is the first human to have a Neuralink device implanted in his brain.


22.003 - 46.818 Lex Fridman

I speak with each of them individually, so use timestamps to jump around or, as I recommend, go hardcore and listen to the whole thing. This is the longest podcast I've ever done. It's a fascinating, super technical, and wide-ranging conversation, and I loved every minute of it. And now a quick few second mention of each sponsor. Check them out in the description.


46.878 - 63.775 Lex Fridman

It's the best way to support this podcast. We've got Cloaked for privacy, MasterClass for learning, Notion for taking notes, Element for hydration, Motific for generative AI deployment, and BetterHelp for mental health. Choose wisely, my friends.


64.596 - 82.97 Lex Fridman

Also, if you want to maybe submit feedback or submit questions that I can answer in the podcast or just get in touch with me, go to lexfridman.com slash contact. And now onto the full ad reads. I try to make these interesting, but if you do skip them, please still check out our sponsors. I enjoy their stuff. Maybe you will too.


84.01 - 108.81 Lex Fridman

This episode is brought to you by Cloaked, a platform that lets you generate a new email address and a phone number every time you sign up for a new website, allowing your actual email and your actual phone number to remain secret from said website. It seems that increasingly the right approach to the interwebs is trust no one.


110.291 - 134.292 Lex Fridman

Of course, there's big companies that have an implied trust because you and them understand that if you give your data over to them and they abuse that privilege, that they would suffer as a company. Now, I don't know if they fully understand that because I think even big companies can probably sell your data or share your data for purposes of making money, all that kind of stuff.


134.432 - 162.034 Lex Fridman

It's just nice to not give over your contact data unless you need to. So Cloaked solves that problem, makes it super easy. It's basically a password manager with extra privacy superpowers. Go to cloaked.com slash Lex to get 14 days free. Or, for a limited time, use code LEXPOD when signing up to get 25% off of an annual Cloaked plan.


162.915 - 186.092 Lex Fridman

This episode is also brought to you by Masterclass, where you can watch over 200 classes from the best people in the world at their respective disciplines. Phil Ivey on poker, for example. Brilliant Masterclass. And also, reminds me of the other Phil, possibly the greatest, of all time, and if you ask him, he will definitely say he's the greatest of all time, which is Phil Hellmuth.


186.852 - 214.313 Lex Fridman

We were supposed to do a podcast many, many times, but I'm just not sure I can handle the level of greatness that is Phil Hellmuth. No, I love him. We'll probably have a podcast at some point in the future. I'm not sure he has a masterclass, but he, his essence, his way of being, his infinite wisdom, and the infinite number of championships that he has won is in itself a masterclass.


215.634 - 238.962 Lex Fridman

But if you want to settle for another mere mortal, one whom some people consider to be the greatest poker player of all time, that's Phil Ivey, and he has an incredible MasterClass on there. Get unlimited access to every MasterClass and get an additional 15% off an annual membership at masterclass.com slash lexpod. That's masterclass.com slash lexpod.


240.043 - 261.213 Lex Fridman

This episode is also brought to you by Notion, a note-taking and team collaboration tool that I've used for a long time now. I've used it primarily for note-taking because, you know, you need a big team for team collaboration. but the people who I know who have used it for the team collaboration capabilities have really loved it.


261.393 - 277.726 Lex Fridman

And the thing I very much appreciate about Notion is how effectively they've been able to integrate LLMs into their tool. Their AI assistant looks across multiple documents. You can ask questions about those multiple documents. Of course, you can do all the


278.326 - 298.259 Lex Fridman

things you kind of expect and do them easily, like summarization or rewriting stuff or helping expand or contract the kind of stuff you've written or even generate a draft. But it can also kind of allow you to ask questions of a thing, like what's the progress of the team on a set of different tasks. Notion does a good job of integrating the LLMs.


298.92 - 324.538 Lex Fridman

Try Notion AI for free when you go to notion.com slash lex. That's all lowercase, notion.com slash lex to try the power of Notion AI today. This episode is brought to you by the thing I'm drinking right now called Element. It's my daily zero sugar and delicious electrolyte mix. They sent me a bunch of cans of sparkling water that I loved and devoured.


325.539 - 349.221 Lex Fridman

as much as you can devour a liquid, because I think that's usually applied to solid foods, but I devoured it, and it was delicious. But yeah, it's an instrumental part of my life. It's how I get the sodium, potassium, magnesium, electrolytes into my body. I'm going for a super long run after this, and I have been drinking Element before, and I sure as heck am going to be drinking Element after.


349.341 - 373.48 Lex Fridman

Same goes for hard training sessions and grappling. Essential for me to feel good, especially when I'm fasting, especially when I'm doing low-carb diets, all of that. My favorite flavor, still to this day, has always been watermelon salt, but there's a lot of other delicious flavors if you want to try them out. Get a sample pack for free with any purchase. Try it at drinkLMNT.com slash lex.


374.48 - 392.312 Lex Fridman

This episode is also brought to you by Motific, a SaaS platform that helps businesses deploy LLMs that are customized with RAG on organization data. This is another use case of LLMs, which is just mind-blowing. Take all the data inside an organization and


393.596 - 410.322 Lex Fridman

allow the people in said organization to query it, to organize it, to summarize it, to analyze it, all of that, to leverage it within different products, to ask questions of how it can be improved in terms of structuring an organization.


410.482 - 431.732 Lex Fridman

Also on the programming front, take all of the code in, take all of the data in, and start asking questions about how the code can be improved, how it can be refactored, rewritten, all that kind of stuff. Now, the challenge that Motific is solving is how to do all that in a secure way. This is like serious stuff. You can't F it up.


432.893 - 464.497 Lex Fridman

Motific is created, I believe, by Cisco, specifically their Outshift group that does the cutting-edge R&D. So these guys know how to do reliable business deployment of stuff that needs to be secure. It needs to be done well. So they help you go from idea to value as soon as possible. Visit motific.ai to learn more. That's M-O-T-I-F-I-C dot A-I.


466.089 - 488.739 Lex Fridman

This episode is also brought to you by BetterHelp, spelled H-E-L-P, help. They figure out what you need and match you with a licensed therapist in under 48 hours for individuals, for couples, easy, discreet, affordable, available worldwide. I think therapy is a really, really, really nice thing. Talk therapy is a really powerful thing.


488.999 - 510.332 Lex Fridman

And I think what BetterHelp does for a lot of people is introduce them to that. It's a great first step. Try it out. For a lot of people, it can work. But at the very least, it's a thing that allows you to explore the possibility of talk therapy and how that feels in your life. They've helped over 4.4 million people. That's crazy.


511.053 - 535.107 Lex Fridman

I think the biggest selling point is just how easy it is to get started, how accessible it is. Of course, there's a million other ways to explore the inner workings of the human mind, looking in the mirror and exploring the Jungian shadow. But the journey of a thousand miles begins with one step. So this is a good first step in exploring your own mind.


535.808 - 550.005 Lex Fridman

Check them out at betterhelp.com slash Lex and save on your first month. That's betterhelp.com slash Lex. And now, dear friends, here's Elon Musk, his fifth time on this, the Lex Fridman Podcast.


566.775 - 589.459 Elon Musk

Drinking coffee or water? Water. Well, I'm so overcaffeinated right now. Do you want some caffeine? I mean, sure. There's a, there's a nitro drink. This will keep you up for, like, you know, tomorrow afternoon, basically. Yeah. I don't know, what is nitro? It's just got a lot of caffeine or something?


590.216 - 591.797 DJ Seo

Don't ask questions. It's called nitro.


592.798 - 613.957 Elon Musk

Do you need to know anything else? It's got nitrogen in it. That's ridiculous. I mean, what we breathe is 78% nitrogen anyway. What do you need to add more for? Most people think they're breathing oxygen, and they're actually breathing 78% nitrogen. You need like a milk bar.


614.697 - 623.461 DJ Seo

Like from Clockwork Orange. Yeah. Is that top three Kubrick film for you? Clockwork Orange? It's pretty good. I mean, it's demented.


624.881 - 641.29 Lex Fridman

Jarring, I'd say. Okay. Okay, so first let's step back, and big congrats on getting Neuralink implanted into a human. That's a historic step for Neuralink.


641.99 - 642.411 Elon Musk

Thanks, yeah.


642.651 - 657.61 Elon Musk

There's many more to come. Yeah, we just obviously have our second implant as well. How did that go? So far, so good. It looks like we've got I think on the order of 400 electrodes that are providing signals.


659.392 - 664.379 DJ Seo

Nice. Yeah. How quickly do you think the number of human participants will scale?


665.06 - 677.414 Elon Musk

It depends somewhat on the regulatory approval, the rate at which we get regulatory approvals. So we're hoping to do 10 by the end of this year, a total of 10, so eight more.


679.304 - 689.788 Lex Fridman

And with each one, you're going to be learning a lot of lessons about the neurobiology of the brain, everything, the whole chain of the neuro, like the decoding, the signal processing, all that kind of stuff.


690.028 - 690.208 Lex Fridman

Yeah.


690.228 - 706.314 Elon Musk

Yeah, I think it's obviously going to get better with each one. I mean, I don't want to jinx it, but it seems to have gone extremely well with the second one. So there's a lot of signal, a lot of electrodes. It's working very well.


706.694 - 713.578 Lex Fridman

What improvements do you think we'll see in Neuralink in the coming, let's say, let's get crazy, coming years?


713.598 - 741.429 Elon Musk

I mean, in years, it's going to be gigantic, because we'll increase the number of electrodes dramatically. We'll improve the signal processing. So even with only roughly, I don't know, 10, 15% of the electrodes working with Noland, with our first patient, we were able to achieve a bits per second that's twice the world record.


742.589 - 762.195 Elon Musk

So I think we'll start, like, vastly exceeding the world record by orders of magnitude in the years to come. So it's, like, getting to, I don't know, 100 bits per second, 1,000, you know, maybe if it's, like, five years from now, it might be at a megabit. Like, faster than any human could possibly communicate by typing or speaking.


763.216 - 778.384 Lex Fridman

Yeah, that BPS is an interesting metric to measure. There might be a big leap in the experience once you reach a certain level of BPS. Yeah. Like entire new ways of interacting with the computer might be unlocked. And with humans.
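For context on the bits-per-second figure discussed here: in cursor-control BCI work, throughput is usually computed from a target-selection task. The sketch below is an illustrative, commonly used formulation with made-up numbers, not necessarily the exact metric Neuralink uses.

```python
import math

def selection_bitrate(num_targets: int, correct: int, incorrect: int, seconds: float) -> float:
    """Illustrative bits-per-second estimate for a target-selection task
    (e.g. clicking tiles on a grid). Each net-correct selection carries
    log2(N - 1) bits; the total is divided by elapsed time. This is a
    common BCI throughput formula, not necessarily Neuralink's exact one."""
    bits_per_selection = math.log2(num_targets - 1)
    net_correct = max(correct - incorrect, 0)
    return bits_per_selection * net_correct / seconds

# Hypothetical numbers: a 35x35 grid, 60 net-correct clicks in one minute.
print(selection_bitrate(num_targets=35 * 35, correct=62, incorrect=2, seconds=60.0))  # ~10.3 bits/s
```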


779.365 - 799.797 Elon Musk

With other humans. Provided they have a neural link too. Right. Otherwise they won't be able to absorb the signals fast enough. Do you think they'll improve the quality of intellectual discourse? Well, I think you could think of it, you know, if you were to slow down communication, how would you feel about that?


808.094 - 814.801 Elon Musk

So now, imagine you could communicate clearly at 10 or 100 or 1,000 times faster than normal.


817.711 - 830.474 Lex Fridman

Listen, I'm pretty sure nobody in their right mind listens to me at 1x. They listen at 2x. I can only imagine what 10x would feel like, or I could actually understand it.


830.494 - 859.99 Elon Musk

I usually default to 1.5x. You can do 2x, but, well, actually, if I'm listening to something and in like 15, 20 minutes I want to go to sleep, then I'll do it at 1.5x. If I'm paying attention, I'll do 2x. Right. But actually, if you start, actually, listening to podcasts or sort of audiobooks or anything, if you get used to doing it at 1.5, then 1x sounds painfully slow.


860.31 - 868.112 Lex Fridman

I'm still holding on to 1 because I'm afraid. I'm afraid of myself becoming bored with the reality, with the real world where everyone's speaking in 1X. Right.


870.533 - 880.739 Elon Musk

Well, it depends on the person. You can speak very fast. We communicate very quickly. And also, if you use a wide range of... If your vocabulary is larger, your effective bit rate is higher.


882.9 - 891.625 Lex Fridman

That's a good way to put it. The effective bit rate. I mean, that is the question, is how much information is actually compressed in the low-bit transfer of language.
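As a rough illustration of the point that a larger vocabulary raises your effective bit rate: if each word were drawn roughly uniformly from a vocabulary of size V, it would carry about log2(V) bits, so the rate scales with both speaking speed and vocabulary size. Real word usage is heavily skewed, so treat this as an upper-bound sketch; the word rates and vocabulary sizes below are assumptions.

```python
import math

def effective_bit_rate(words_per_minute: float, vocabulary_size: int) -> float:
    """Upper-bound estimate: each word carries up to log2(V) bits if all
    V words were equally likely (real usage is far more skewed)."""
    bits_per_word = math.log2(vocabulary_size)
    return words_per_minute / 60.0 * bits_per_word

# Same speaking speed, different active vocabularies (hypothetical figures).
print(effective_bit_rate(150, 5_000))   # ~30.7 bits/s
print(effective_bit_rate(150, 50_000))  # ~39.0 bits/s
```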


892.265 - 917.567 Elon Musk

Yeah, if there's a single word that is able to convey... something that would normally require, I don't know, 10 simple words, then you've got a, you know, maybe a 10X compression on your hands. And that's really like with memes, memes are like data compression. It conveys a whole, you're simultaneously hit with a wide range of symbols that you can interpret.


918.548 - 942.997 Elon Musk

And it's, you kind of get it faster than if it were words or a simple picture. And of course, you're referring to memes broadly like ideas. Yeah, there's an entire idea structure that is like an idea template. And then you can add something to that idea template. But somebody has that pre-existing idea template in their head.


944.118 - 951.3 Elon Musk

So when you add that incremental bit of information, you're conveying much more than if you just said a few words. It's everything associated with that meme.


952.08 - 962.351 Lex Fridman

You think there'll be emergent leaps of capability as you scale the number of electrodes? Like there'll be a certain, do you think there'll be like actual number where it just, the human experience will be altered?


963.072 - 963.292 Elon Musk

Yes.


964.633 - 971.12 Lex Fridman

What do you think that number might be? Whether electrodes or BPS? We of course don't know for sure, but is this 10,000, 100,000?


973.877 - 989.096 Elon Musk

Yeah, I mean, certainly if you're anywhere at 10,000 bits per second, I mean, that's vastly faster than any human could communicate right now. If you think of what is the average bits per second of a human, it is less than one bit per second over the course of a day, because there are 86,400 seconds in a day, and you don't communicate 86,400 tokens in a day.


995.39 - 1016.33 Elon Musk

Therefore, your bits per second is less than one, averaged over 24 hours. It's quite slow. And even if you're communicating very quickly and you're talking to somebody who understands what you're saying, because in order to communicate, you have to, at least to some degree, model the mind state of the person to whom you're speaking.
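A quick version of the averaging arithmetic being made here; the daily token count is an assumption picked only to illustrate the point.

```python
# Average output rate over a full day: even a talkative day's worth of tokens,
# spread over 86,400 seconds, comes out to well under one per second.
SECONDS_PER_DAY = 86_400
tokens_per_day = 20_000  # assumed daily output; the exact figure isn't the point

average_rate = tokens_per_day / SECONDS_PER_DAY
print(f"{average_rate:.2f} tokens/s averaged over 24 hours")  # ~0.23
```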


1017.331 - 1030.262 Elon Musk

Then take the concept you're trying to convey, compress that into a small number of syllables, speak them, and hope that the other person decompresses them into a conceptual structure that is as close to what you have in your mind as possible.


1031.072 - 1033.812 Lex Fridman

Yeah, I mean, there's a lot of signal loss there in that process.


1033.952 - 1060.297 Elon Musk

Yeah, very lossy compression and decompression. And a lot of what your neurons are doing is distilling the concepts down to a small number of symbols, of say, syllables that I'm speaking, or keystrokes, whatever the case may be. So that's a lot of what your brain computation is doing. Now, there is an argument that that's actually


1063.294 - 1085.992 Elon Musk

a healthy thing to do or a helpful thing to do because as you try to compress complex concepts, you're perhaps forced to distill what is most essential in those concepts as opposed to just all the fluff. So in the process of compression, you distill things down to what matters the most because you can only say a few things. So that is perhaps helpful.


1086.613 - 1109.454 Elon Musk

We'll probably get, if our data rate increases, it's highly probable that we'll become far more verbose. Just like your computer, you know, when computers had like, my first computer had 8K of RAM, you know, so you really thought about every byte. And, you know, now you've got computers with many gigabytes of RAM.


1110.194 - 1139.688 Elon Musk

So, you know, if you want to do an iPhone app that just says, hello world, it's probably, I don't know, several megabytes minimum, a bunch of fluff. But nonetheless, we still prefer to have the computer with more memory and more compute. So the long-term aspiration of Neuralink is to improve the AI-human symbiosis by increasing the bandwidth of the communication.


1140.629 - 1161.3 Elon Musk

Because in the most benign scenario of AI, you have to consider that the AI is simply going to get bored waiting for you to spit out a few words. I mean, if the AI can communicate at terabits per second and you're communicating at bits per second, it's like talking to a tree.


1162.12 - 1168.424 Lex Fridman

Well, it is a very interesting question for a super intelligent species. What use are humans?


1171.065 - 1181.711 Elon Musk

I think there is some argument for humans as a source of will. Will. Will, yeah. Source of will or purpose. So if you consider the


1183.564 - 1209.058 Elon Musk

human mind as being essentially, there's the primitive limbic elements, which basically even, like, reptiles have, and there's the cortex, the thinking and planning part of the brain. Now, the cortex is much smarter than the limbic system, and yet is largely in service to the limbic system. It's trying to make the limbic system happy. I mean, the sheer amount of compute that's gone into people trying to get laid is insane. Without the, without actually


1210.857 - 1231.45 Elon Musk

seeking procreation, they're just literally trying to do this sort of simple motion, and they get a kick out of it. Yeah. So this simple, and in the abstract rather absurd, motion, which is sex, the cortex is putting a massive amount of compute into trying to figure out how to do that.


1232.19 - 1237.932 Lex Fridman

So like 90% of distributed compute of the human species is spent on trying to get laid, probably. Like a large percentage.


1237.952 - 1264.608 Elon Musk

Yeah, yeah. There's no purpose to most sex except hedonistic. You know, it's just sort of joy or whatever. Dopamine release. Now, once in a while, it's procreation. But for humans, it's mostly... Modern humans, it's mostly recreational. And so... So your cortex, much smarter than your limbic system, is trying to make the limbic system happy because the limbic system wants to have sex.


1265.769 - 1286.567 Elon Musk

Or wants some tasty food, or whatever the case may be. And then that is then further augmented by the tertiary system, which is your phone, your laptop, iPad, whatever, or your computing stuff. That's your tertiary layer. So you're actually already a cyborg. You have this tertiary compute layer, which is in the form of your computer with all the applications, all your compute devices.


1288.807 - 1300.744 Elon Musk

And so in the getting laid front, there's actually a massive amount of digital compute also trying to get laid. you know, with like Tinder and whatever, you know.


1301.085 - 1306.091 DJ Seo

Yeah. So the compute that we humans have built is also participating.


1306.111 - 1311.938 Elon Musk

Yeah. I mean, there's like gigawatts of compute going into getting laid, of digital compute. Yeah.


1313.646 - 1320.833 Lex Fridman

What if AGI was... This is happening as we speak. If we merge with AI, it's just going to expand the compute that we humans use.


1321.194 - 1335.147 Elon Musk

Pretty much. Well, it's just one of the things, certainly, yeah. Yeah. But what I'm saying is that, yes, is there a use for humans... Well, there's this fundamental question of what's the meaning of life? Why do anything at all?


1336.687 - 1359.055 Elon Musk

And so if our simple limbic system provides a source of will to do something that then goes to our cortex, that then goes to our, you know, tertiary compute layer, then, you know, I don't know, it might actually be that the AI in a benign scenario is simply trying to make the human limbic system happy.


1360.609 - 1374.475 Lex Fridman

Yeah, it seems like the will is not just about the limbic system. There's a lot of interesting, complicated things in there. We also want power. That's limbic too, I think. But then we also want to, in a kind of cooperative way, alleviate the suffering in the world.


1375.681 - 1402.004 Lex Fridman

Not everybody does, but yeah, sure, some people do. As a group of humans, when we get together, we start to have this kind of collective intelligence that is more complex in its will than the underlying individual descendants of apes, right? So there's, like, other motivations, and that could be a really interesting source of an objective function for AGI?


1402.224 - 1429.402 Elon Musk

Yeah. I mean, there are these sort of fairly cerebral kind of higher-level goals. I mean, for me, it's like, what's the meaning of life, or understanding the nature of the universe, is of great interest to me and hopefully to the AI. And that's the mission of xAI and Grok: understand the universe.


1429.922 - 1441.105 Lex Fridman

So do you think people, when you have a Neuralink with 10,000, 100,000 channels, most of the use cases will be communication with AI systems?


1444.386 - 1476.079 Elon Musk

Well, assuming that they're not... I mean, they're solving basic... neurological issues that people have, you know, if they've got damaged neurons in their spinal cord or neck, or, you know, as is the case with our first two patients, then, you know, this, obviously, the first order of business is solving fundamental neuron damage in the spinal cord, neck or in the brain itself. So


1478.379 - 1507.149 Elon Musk

Our second product is called Blindsight, which is to enable people who are completely blind, lost both eyes or optic nerve or just can't see at all, to be able to see by directly triggering the neurons in the visual cortex. So we're just starting at the basics here. This is the simple stuff, relatively speaking: solving neuron damage.


1508.309 - 1543.126 Elon Musk

You know, it can also solve, I think, probably schizophrenia. If people have seizures of some kind, it could probably solve that. It could help with memory. So there's kind of a tech tree, if you will: you've got the basics. You need literacy before you can have Lord of the Rings. Got it. Do you have letters and alphabet? Okay, great. Words?


1543.867 - 1571.484 Elon Musk

Eventually you get sagas. I think there may be some things to worry about in the future, but the first several years are really just solving basic neurological damage. Like for people who have essentially complete or near-complete loss of connection from the brain to the body, like Stephen Hawking would be an example, Neuralink would be incredibly profound.


1571.504 - 1582.43 Elon Musk

Because I mean, you can imagine if Stephen Hawking could communicate as fast as we're communicating, perhaps faster. And that's certainly possible. Probable, in fact, likely, I'd say.


1583.27 - 1594.613 Lex Fridman

So there's a kind of dual track of medical and non-medical, meaning, so everything you've talked about could be applied to people who are non-disabled in the future?


1594.933 - 1621.053 Elon Musk

The logical thing to do, the sensible thing to do, is to start off solving basic problems, neuron damage issues. Because there's obviously some risk with a new device. You can't get the risk down to zero. It's not possible. So you want to have the highest possible reward, given there's a certain irreducible risk.


1621.973 - 1650.67 Elon Musk

And if somebody's able to have a profound improvement in their communication, that's worth the risk. As you get the risk down. Yeah, as you get the risk down, once the risk is down to, you know, if you have thousands of people that have been using it for years and the risk is minimal, then perhaps at that point you could consider saying, okay, let's aim for augmentation. Now, I think we...


1651.827 - 1675.102 Elon Musk

We're actually going to aim for augmentation with people who have neuron damage. So we're not just aiming to give people a communication data rate equivalent to normal humans. We're aiming to give people who have, you know, quadriplegic or maybe have complete loss of the connection to the brain and body, a communication data rate that exceeds normal humans. I mean, while we're in there, why not?


1675.282 - 1676.502 Elon Musk

Let's give people superpowers.


1677.723 - 1683.588 Lex Fridman

And the same for vision. As you restore vision, there could be aspects of that restoration that are superhuman.


1684.088 - 1702.663 Elon Musk

Yeah. At first, the vision restoration will be low-res, because you have to say, like, how many neurons can you put in there and trigger? And you can do things where you adjust the electric field to, like, even if you've got, say, 10,000 neurons, it's not just 10,000 pixels, because you can


1703.363 - 1729.886 Elon Musk

adjust the field between the neurons and do them in patterns in order to have, say, 10,000 electrodes effectively give you, I don't know, maybe like having a megapixel or a 10 megapixel situation. And then over time, I think you get to higher resolution than human eyes, and you could also see in different wavelengths.


1730.922 - 1743.461 Elon Musk

So like Geordi La Forge from Star Trek, you know, like the thing, you could just, do you want to see in radar? No problem. You can see ultraviolet, infrared, eagle vision, whatever you want.


1745.472 - 1752.376 Lex Fridman

Do you think there will be – let me ask a Joe Rogan question. Do you think there will be – I just recently have taken ayahuasca.


1753.776 - 1777.613 DJ Seo

Is that a Joe Rogan question? No. Well, yes. Well, I guess technically it is. Yeah. Have you ever tried DMT, bro? I love you, Joe. That's a classic. Have you said much about it? I have not. I have not. I have not. Okay. Well, we'll just spill the beans. It was a truly incredible experience. Don't turn the tables on you. Wow. I mean, you're in the jungle.


1779.394 - 1787.301 Lex Fridman

Yeah, amongst the trees, myself. Yeah, must have been crazy. And the shaman. Yeah, yeah, yeah. With the insects, with the animals all around you, like jungle as far as I can see.


1787.421 - 1787.741 Elon Musk

I mean.


1788.462 - 1791.984 Lex Fridman

That's the way to do it. Things are going to look pretty wild. Yeah, pretty wild.


1793.906 - 1798.67 Elon Musk

I took an extremely high dose. Don't go hugging an anaconda or something, you know.


1799.51 - 1822.459 DJ Seo

You haven't lived unless you've made love to an anaconda. I'm sorry, but... Snakes and ladders. Yeah, it was... I took an extremely high dose of... Okay. Nine cups. And... Damn, okay, that sounds like a lot. Of course. It's normally one cup? Or one or two? Well, usually one, yeah. Wait...


1824.46 - 1847.536 Lex Fridman

Like right off the bat, or do you work your way up to it? So I, across two days, because on the first day I took two, and... Okay. It was a, it was a ride, but it wasn't quite like, it wasn't like a revelation. It wasn't an into-deep-space type ride. It was just like a little airplane ride. I saw some trees and some, some visuals and all, I just saw a dragon, all that kind of stuff.


1848.096 - 1870.464 Lex Fridman

But nine cups? You went to Pluto, I think. Pluto? Yeah. No, deep space. Deep space. No, one of the interesting aspects of my experience is, I thought I would have some demons, some stuff to work through. That's what people, that's what everyone says. No one ever says... Yeah, I had nothing. I had all positive. You're just so full of... your soul. I don't think so. I don't know.


1872.404 - 1894.994 Lex Fridman

But I kept thinking about, it had extremely high resolution thoughts about the people I know in my life. You were there. And it's just, not from my relationship with that person, but just as the person themselves, I had just this deep gratitude of who they are. It was just like this exploration. Like Sims or whatever, you get to watch them.


1895.434 - 1898.856 Lex Fridman

I got to watch people and just be in awe of how amazing they are.


1899.036 - 1899.376 Elon Musk

Sounds awesome.


1899.396 - 1902.617 Lex Fridman

Yeah, it was great. I was waiting for... When's the demon coming?


1904.438 - 1904.898 Matthew MacDougall

Exactly.


1905.218 - 1926.547 Lex Fridman

Maybe I'll have some negative thoughts. Nothing. Nothing. I had just extreme gratitude for them. And also a lot of space travel. Space travel to where? So here's what it was. It was people... the human beings that I know, they had this kind of, the best way I can describe it is they had a glow to them.


1927.368 - 1941.913 Lex Fridman

And then I kept flying out from them to see Earth, to see our solar system, to see our galaxy, and I saw that light, that glow, all across the universe.


1942.074 - 1951.538 DJ Seo

Whatever that form is, whatever that... Did you go past the Milky Way? Yeah. Okay. You were like intergalactic.


1951.658 - 1952.539 Lex Fridman

Yeah, intergalactic.


1952.599 - 1953.32 DJ Seo

Okay, dang.


1953.42 - 1975.198 Lex Fridman

But always pointing in. Okay. Yeah. Past the Milky Way, past... I mean, I saw a huge number of galaxies, intergalactic, and all of it was glowing. But I couldn't control that travel because I would actually explore near... distances to the solar system, see if there's aliens or any of that kind of stuff. Zero aliens? Implication of aliens, because they were glowing.


1975.218 - 2001.898 Lex Fridman

They were glowing in the same way that humans were glowing. That life force that I was seeing, the thing that made... humans amazing was there throughout the universe. Like there was these glowing dots. So I don't know. It made me feel like there's life. No, not life, but something, whatever makes humans amazing all throughout the universe. Sounds good. Yeah, it was amazing. No demons, no demons.


2002.538 - 2007.082 Lex Fridman

I looked for the demons. There's no demons. There were dragons and they're pretty awesome. So the thing about trees.


2007.102 - 2008.123 DJ Seo

Was there anything scary at all?


2008.143 - 2032.197 Lex Fridman

No. Dragons, but they weren't scary. They were friends. They were protective. So the thing is... Magic dragon? No, it was, it was more like Game of Thrones. They weren't very friendly. They were very big. So the thing is, the giant trees at night, which is where I was, I mean, the jungle's kind of scary. Yeah. The trees started to look like dragons, and they were all, like, looking at me.


2033.104 - 2033.764 Elon Musk

Sure, okay.


2033.824 - 2055.844 Lex Fridman

And it didn't seem scary. It seemed like they were protecting me. And the shaman and the people... They didn't speak any English, by the way, which made it even scarier. Because we're not even like... We're worlds apart in many ways. It's just... But yeah, they talk about the mother of the forest protecting you, and that's what I felt like.


2056.044 - 2061.429 DJ Seo

And you're way out in the jungle. Way out. This is not like a tourist retreat.


2061.549 - 2065.093 Elon Musk

You know, like 10 miles outside of Rio or something. No, we weren't...


2066.554 - 2068.895 DJ Seo

No, this is not a... You're a deep, deep Amazon.


2069.475 - 2085.741 Lex Fridman

Me and this guy named Paul Rosalie, who basically is a Tarzan. He lives in the jungle. We went out deep, and we just went crazy. Wow. Yeah. So anyway, can I get that same experience in a Neuralink? Probably, yeah. I guess that is the question for non-disabled people.


2085.781 - 2094.904 Lex Fridman

Do you think that there's a lot in our perception, in our experience of the world that could be explored, that could be played with using Neuralink?


2095.304 - 2118.264 Elon Musk

Yeah, I mean... Neuralink is really a generalized input-output device. You know, it's just reading electrical signals and generating electrical signals. And I mean, everything that you've ever experienced in your whole life, smell, you know, emotions, all of those are electrical signals. So


2120.319 - 2144.915 Elon Musk

It's kind of weird to think that your entire life experience is distilled down to electrical signals for neurons, but that is in fact the case. I mean, that's at least what all the evidence points to. So, I mean, you could trigger the right neuron, you could trigger a particular scent, you could certainly make things glow. I mean, do pretty much anything.


2144.935 - 2160.509 Elon Musk

I mean, really, you can think of the brain as a biological computer. So if there are certain, say, chips or elements of that biological computer that are broken, let's say your ability to, if you've got a stroke, if you've had a stroke, that means you've got, some part of your brain is damaged.


2161.249 - 2186.473 Elon Musk

If that, let's say, is speech generation or the ability to move your left hand, that's the kind of thing that a Neuralink could solve. If you've got, like, a massive amount of memory loss that's just gone, well, we can't get the memories back. We could restore your ability to make memories, but we can't, you know, restore memories that are fully gone.


2187.654 - 2214.831 Elon Musk

Now, I should say, maybe if part of the memory is there and the means of accessing the memory is the part that's broken, then we could re-enable the ability to access the memory. But you can think of it like RAM in a computer. If the RAM is destroyed or your SD card is destroyed, we can't get that back. But if the connection to the SD card is destroyed, we can fix that.


2216.409 - 2219.272 Elon Musk

If it is fixable physically, then it can be fixed.


2219.613 - 2226.58 Lex Fridman

Of course, with AI, just like you can repair photographs and fill in missing parts of photographs, maybe you can do the same.


2227.401 - 2243.315 Elon Musk

Yeah, you could say like create the most probable set of memories based on all information you have about that person. You could then... there would be probabilistic restoration of memory. Now we're getting pretty esoteric here.


2243.635 - 2260.886 Lex Fridman

But that is one of the most beautiful aspects of the human experience is remembering the good memories. Like we live most of our life, as Daniel Kahneman has talked about, in our memories, not in the actual moment. We're collecting memories and we kind of relive them in our head. And that's the good times.


2261.587 - 2267.671 Lex Fridman

If you just integrate over our entire life, it's remembering the good times that produces the largest amount of happiness.


2268.172 - 2276.318 Elon Musk

Yeah, well, I mean, what are we but our memories? And what is death but the loss of memory? Loss of information.


2276.338 - 2299.567 Elon Musk

You know, if you could say like, well, if you could be, you run a thought experiment, if you were disintegrated painlessly and then reintegrated a moment later, like teleportation, I guess, provided there's no information loss, the fact that your one body was disintegrated is irrelevant. And memories is just such a huge part of that.


2299.927 - 2304.269 Elon Musk

Death is fundamentally the loss of information, the loss of memory.


2305.97 - 2329.693 Lex Fridman

So if we can store them as accurately as possible, we basically achieve a kind of immortality. You've talked about the threats, the safety concerns of AI. Let's look at long-term visions. Do you think Neuralink is... in your view, the best current approach we have for AI safety?


2330.474 - 2353.452 Elon Musk

It's an idea that may help with AI safety. Certainly not... I wouldn't want to claim it's like some panacea or that it's a sure thing. But, I mean, many years ago I was thinking like, well, what would inhibit alignment of collective human will with AI?


2354.801 - 2380.019 Elon Musk

artificial intelligence, and the low data rate of humans, especially our slow output rate, would necessarily, just because the communication is so slow, diminish the link between humans and computers. Like, the more you're a tree, the less you know what the tree is,


2381.1 - 2386.807 Elon Musk

let's say you look at this plant or whatever and like, hey, I'd really like to make that plant happy, but it's not saying a lot, you know?


2388.362 - 2398.129 Lex Fridman

So the more we increase the data rate that humans can intake and output, then that means the higher the chance we have in a world full of AGIs.


2398.269 - 2417.809 Elon Musk

Yeah. We could better align collective human will with AI if the output rate especially was dramatically increased. And I think there's potential to increase the output rate by, I don't know, three, maybe six, maybe more orders of magnitude. Yeah. It's better than the current situation.


2418.71 - 2437.002 Lex Fridman

And that output rate would be increased by increasing the number of electrodes, the number of channels, and also maybe implanting multiple Neuralinks. Yeah. Do you think there will be a world in the next couple of decades where hundreds of millions of people have Neuralinks? Yeah, I do.


2439.073 - 2447.699 Lex Fridman

Do you think when people just, when they see the capabilities, the superhuman capabilities that are possible, and then the safety is demonstrated?


2448.4 - 2480.32 Elon Musk

Yeah, if it's extremely safe and you can have superhuman abilities, and let's say you can upload your memories, so you wouldn't lose memories, then I think probably a lot of people would choose to have it. It would supersede the cell phone, for example. The biggest problem that, say, a phone has is trying to figure out what you want.


2482.562 - 2501.292 Elon Musk

So that's why you've got autocomplete and you've got output, which is all the pixels on the screen. But from the perspective of the human, the output is so frigging slow. Desktop or phone is desperately just trying to understand what you want. And there's an eternity between every keystroke from a computer standpoint.


2502.855 - 2508.585 Lex Fridman

Yeah. The computer's talking to a tree, a slow-moving tree that's trying to swipe.


2509.066 - 2521.913 Elon Musk

Yeah. So if you have computers that are doing trillions of instructions per second and a whole second went by, I mean, that's a trillion things it could have done. Yeah.


2522.314 - 2531.604 Lex Fridman

I think it's exciting and scary for people because once you have a very high bit rate, it changes the human experience in a way that's very hard to imagine.


2532.304 - 2551.339 Elon Musk

Yeah. It would be, We would be something different. I mean, some sort of futuristic cyborg. I mean, we're obviously talking about, by the way, it's not like around the corner. You asked me what the distant future is. Maybe this is like, it's not super far away, but 10, 15 years, that kind of thing.


2551.359 - 2557.983 DJ Seo

When can I get one? 10 years?


2559.704 - 2564.441 Elon Musk

Probably less than 10 years. Depends what you want to do, you know?


2565.021 - 2582.439 Lex Fridman

Hey, if I can get like a thousand BPS. A thousand BPS. And it's safe and I can just interact with the computer while laying back and eating Cheetos. I don't eat Cheetos. There's certain aspects of human-computer interaction when done more efficiently and more enjoyably are worth it.


2597.849 - 2599.29 Elon Musk

Because the reaction time would be faster.


2601.911 - 2605.432 Lex Fridman

I got to visit Memphis. Yeah, yeah. You're going big on compute.


2606.073 - 2608.334 Lex Fridman

You've also said play to win or don't play at all.


2609.254 - 2610.335 Lex Fridman

What does it take to win?


2611.715 - 2626.193 Elon Musk

For AI, that means you've got to have the most powerful training compute. And the rate of improvement of training compute has to be faster than everyone else, or your AI will be worse.


2627.234 - 2651.823 Lex Fridman

So how can Grok, let's say, 3, that might be available next year? Well, hopefully end of this year. Grok 3. If we're lucky, yeah. How can that be the best LLM, the best AI system available in the world? How much of it is compute? How much of it is data? How much of it is post-training? How much of it is the product that you package it up in? All that kind of stuff.


2654.386 - 2676.544 Elon Musk

I mean, they all matter. It's sort of like saying what, you know, let's say it's a Formula One race, like what matters more, the car or the driver? I mean, they both matter. If your car is not fast, then, you know, if it's like, let's say it's half the horsepower of your competitors, the best driver will still lose. If it's twice the horsepower, then probably even a mediocre driver will still win.


2677.605 - 2704.649 Elon Musk

So the training compute is kind of like the engine. How many horsepower does the engine have? So really, you want to try to do the best on that. And then how efficiently do you use that training compute? And how efficiently do you do the inference, the use of the AI? So obviously, that comes down to human talent. And then what unique access to data do you have? That also plays a role.


2704.729 - 2733.604 Elon Musk

Do you think Twitter data will be useful? Yeah, I mean, I think most of the leading AI companies have already scraped all the Twitter data. Not that I think they have. So on a go-forward basis, what's useful is the fact that it's up to the second. Because it's hard for them to scrape in real time. So there's an immediacy advantage that Grok has already.


2734.584 - 2759.312 Elon Musk

I think with Tesla and the real-time video coming from several million cars, ultimately tens of millions of cars, with Optimus, there might be hundreds of millions of Optimus robots, maybe billions, learning a tremendous amount from the real world. That's the biggest source of data, I think, ultimately, is sort of Optimus. Optimus is going to be the biggest source of data. Because reality scales.


2762.285 - 2790.836 Elon Musk

Reality scales to the scale of reality. It's actually humbling to see how little data humans have actually been able to accumulate. Really, if you ask how many trillions of usable tokens humans have generated, on a non-duplicative basis, like discounting spam and repetitive stuff, it's not a huge number. You run out pretty quickly.


2791.797 - 2802.986 Lex Fridman

And Optimus can go... So Tesla cars unfortunately have to stay on the road. The Optimus robot can go anywhere. There's more reality off the road, and it can go off-road.


2803.026 - 2830.443 Elon Musk

I mean, like the optimus robot can like pick up the cup and see, did it pick up the cup in the right way? Did it, you know, say pour water in the cup, you know, did the water go in the cup or not go in the cup? Did it spill water or not? Simple stuff like that. But it can do that at scale times a billion. So generate useful data from reality. So cause and effect stuff.


2831.223 - 2838.208 DJ Seo

What do you think it takes to get to mass production of humanoid robots like that? It's the same as cars, really.


2839.329 - 2866.31 Elon Musk

Global capacity for vehicles is about 100 million a year. And it could be higher, it's just that the demand is on the order of 100 million a year. And then there's roughly 2 billion vehicles that are in use in some way. So, which makes sense, like the life of a vehicle is about 20 years. So at steady state, you can have 100 million vehicles produced a year with a 2 billion vehicle fleet, roughly.
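The steady-state arithmetic in this passage, written out with the round figures given above.

```python
# Steady-state fleet size = production rate x vehicle lifetime.
production_per_year = 100_000_000   # ~100 million vehicles per year
vehicle_lifetime_years = 20         # ~20-year life per vehicle

steady_state_fleet = production_per_year * vehicle_lifetime_years
print(f"{steady_state_fleet:,} vehicles in use")  # 2,000,000,000
```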


2867.631 - 2875.917 Elon Musk

Now for humanoid robots, the utility is much greater. So my guess is humanoid robots are more like at a billion plus per year.


2876.577 - 2886.442 Lex Fridman

But, you know, until you came along and started building Optimus, it was thought to be an extremely difficult problem. I mean, it still is an extremely difficult problem.


2886.482 - 2902.93 Elon Musk

So walk in the park. I mean, Optimus currently would struggle to walk in the park. I mean, it can walk in a park. A park is not too difficult. But it will be able to walk over a wide range of terrain. Yeah, and pick up objects. Yeah, yeah.


2903.73 - 2915.798 Lex Fridman

It can already do that. But like all kinds of objects. Yeah, yeah. All foreign objects. I mean, pouring water in a cup is not trivial. Because if you don't know anything about the container, it could be all kinds of containers.


2916.417 - 2932.428 Elon Musk

Yeah, there's going to be an immense amount of engineering just going into the hand. The hand might be close to half of all the engineering in Optimus. From an electromechanical standpoint, the hand is probably roughly half of the engineering.


2933.029 - 2938.132 Lex Fridman

But so much of the intelligence of humans goes into what we do with our hands.


2939.073 - 2944.817 Lex Fridman

It's the manipulation of the world, manipulation of objects in the world. Intelligent, safe manipulation of objects in the world, yeah.


2946.337 - 2962.464 Elon Musk

I mean, you start really thinking about your hand and how it works. You know? I do it all the time. The sensory control homunculus is where you have humongous hands. Yeah. So, I mean, like your hands, the actuators, the muscles of your hand are almost overwhelmingly in your forearm.


2963.124 - 2963.244 Noland Arbaugh

Mm-hmm.


2963.843 - 2991.161 Elon Musk

So your forearm has the muscles that actually control your hand. There's a few small muscles in the hand itself, but your hand is really like a skeleton meat puppet and with cables. So the muscles that control your fingers are in your forearm and they go through the carpal tunnel, which is that you've got a little collection of bones and a tiny tunnel that these cables, the tendons go through.


2991.602 - 2996.184 Elon Musk

And those tendons are mostly what moves your hands.


2997.445 - 3003.487 Lex Fridman

And something like those tendons has to be re-engineered into the optimus in order to do all that kind of stuff.


3003.507 - 3022.515 Elon Musk

Yeah, so like the current optimus, we tried putting the actuators in the hand itself. Then you sort of end up having these like- Giant hands? Yeah, giant hands that look weird. And then they don't actually have enough degrees of freedom or enough strength. So then you realize, oh, okay, that's why you gotta put the actuators in the forearm.


3023.375 - 3040.343 Elon Musk

And just like a human, you've got to run cables through a narrow tunnel to operate the fingers. And then there's also a reason for not having all the fingers the same length. So it wouldn't be expensive from an energy or evolutionary standpoint to have all your fingers be the same length. So why not do the same length?


3040.723 - 3041.284 Lex Fridman

Yeah, why not?


3041.744 - 3058.281 Elon Musk

Because it's actually better to have different lengths. Your dexterity is better if you've got fingers of different length. There are more things you can do, and your dexterity is actually better if your fingers are of different length. There's a reason we've got a little finger. Why not have a little finger that's bigger?


3059.102 - 3059.362 Lex Fridman

Yeah.


3059.442 - 3072.585 Elon Musk

Because it helps you with fine motor skills. This little finger helps? It does. If you lost your little finger, you'd have noticeably less dexterity.


3073.005 - 3078.987 Lex Fridman

So as you're figuring out this problem, you have to also figure out a way to do it so you can mass-manufacture it, so it has to be as simple as possible.


3079.667 - 3098.264 Elon Musk

It's actually going to be quite complicated. The as-possible part is it's quite a high bar. If you want to have a humanoid robot that can do things that a human can do, it's actually a very high bar. So our new arm has 22 degrees of freedom instead of 11 and has the, like I said, the actuators in the forearm.


3099.525 - 3128.021 Elon Musk

And all the actuators are designed from scratch, from physics first principles, and the sensors are all designed from scratch as well. And we'll continue to put a tremendous amount of engineering effort into improving the hand. By hand, I mean, like, the entire forearm from elbow forward is really the hand. So that's incredibly difficult engineering, actually.


3129.442 - 3143.545 Elon Musk

And so the simplest possible version of a human or a robot that can do even most, perhaps not all, of what a human can do is actually still very complicated. It's not simple. It's very difficult.


3144.385 - 3159.632 Lex Fridman

Can you just speak to what it takes for a great engineering team, for you, what I saw in Memphis, the supercomputer cluster, is just this intense drive towards simplifying the process, understanding the process, constantly improving it, constantly iterating it.


3164.854 - 3190.604 Elon Musk

Well, it's easy to say simplify it, and it's very difficult to do it. You know, I have this very basic first principles algorithm that I run kind of as like a mantra, which is to first question the requirements, make the requirements less dumb. The requirements are always dumb to some degree. So you want to sort of by reducing the number of requirements.


3192.545 - 3210.071 Elon Musk

And no matter how smart the person is who gave you those requirements, they're still dumb to some degree. You have to start there because otherwise you could get the perfect answer to the wrong question. So try to make the question the least wrong possible. That's what question the requirements means.


3210.211 - 3229.719 Elon Musk

And then the second thing is try to delete the, whatever the step is, the part or the process step. Sounds very obvious, but people, often forget to try deleting it entirely. And if you're not forced to put back at least 10% of what you delete, you're not deleting enough.


3232.941 - 3255.612 Elon Musk

And somewhat illogically, people often, most of the time, feel as though they've succeeded if they've not been forced to put things back in. But actually they haven't, because they've been overly conservative and have left things in there that shouldn't be. And only the third thing is to try to optimize it or simplify it.


3257.532 - 3278.699 Elon Musk

Again, these all sound, I think, very, very obvious when I say them, but the number of times I've made these mistakes is more than I care to remember. That's why I have this mantra. So in fact, I'd say that the most common mistake of smart engineers is to optimize a thing that should not exist, right?

3280.434 - 3292.577 Lex Fridman

So like you say, you run through the algorithm and basically show up to a problem, show up to the supercomputer cluster and see the process and ask, can this be deleted? Yeah, first try to delete it.

3294.157 - 3321.993 Elon Musk

Yeah, that's not easy to do. And actually, what generally makes people uneasy is that at least some of the things you delete, you will have to put back in. But going back to where our limbic system can steer us wrong: we tend to remember, sometimes with a jarring level of pain, the times we deleted something that we subsequently needed.

3323.762 - 3346.282 Elon Musk

And so people will remember that one time they forgot to put in this thing three years ago and that caused them trouble. And so they overcorrect, and then they put too much stuff in there and overcomplicate things. So you actually have to say, no, we're deliberately going to delete more than we should, so that at least one in ten things we delete gets put back in.

3348.679 - 3354.823 Lex Fridman

And I've seen you suggest just that, that something should be deleted and you can kind of see the pain.

3355.604 - 3356.304 DJ Seo

Oh yeah, absolutely.

3356.444 - 3378.962 Elon Musk

Everybody feels a little bit of the pain. Absolutely. And I tell them in advance, like, yeah, some of the things that we delete, we're going to put back in. And people get a little shook by that. But it makes sense, because if you're so conservative as to never have to put anything back in, you obviously have a lot of stuff that isn't needed. So you've got to overcorrect.

3379.783 - 3382.484 Elon Musk

This is, I would say, like a cortical override to limbic instinct.

3383.904 - 3385.825 Lex Fridman

One of many that probably leads us astray.

3386.865 - 3409.519 Elon Musk

Yeah. There's a step four as well, which is: any given thing can be sped up. However fast you think it can be done, whatever speed it is being done at, it can be done faster. But you shouldn't speed things up until you've tried to delete it and optimize it. Otherwise, you're speeding up something that shouldn't exist, which is absurd. And then the fifth thing is to automate it.

3411.713 - 3426.908 Elon Musk

And I've gone backwards so many times where I've automated something, sped it up, simplified it, and then deleted it. And I got tired of doing that. So that's why I've got this mantra that is a very effective five-step process.
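
A minimal sketch of the five-step mantra as it is laid out above, written as an ordered checklist in Python. The step names and one-line notes come straight from the conversation; the helper function and the example input are hypothetical, just to show that the ordering is the whole point: never optimize, accelerate, or automate something you have not first tried to delete.

```python
# The five-step process, in the order described above. Applying the later
# steps before the earlier ones is the classic failure mode ("optimizing a
# thing that should not exist").
FIVE_STEPS = [
    ("Question the requirements", "make them less dumb; they are always dumb to some degree"),
    ("Delete the part or process step", "if you are not adding back ~10% of what you delete, you are not deleting enough"),
    ("Simplify / optimize", "only now -- otherwise you optimize something that should not exist"),
    ("Accelerate", "whatever the current speed is, it can be done faster"),
    ("Automate", "last, never first"),
]

def review(item: str) -> None:
    """Walk a single part or process step through the checklist, in order."""
    for number, (action, note) in enumerate(FIVE_STEPS, start=1):
        print(f"{number}. {action} [{item}]: {note}")

# Hypothetical example input; any part or process step works.
review("fiber-optic cable install")
```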

3426.948 - 3431.645 DJ Seo

It works great. When you've already automated, deleting must be real painful.

3432.246 - 3444.051 Lex Fridman

Yeah, it's like, wow, I really wasted a lot of effort there. I mean, what you've done with the cluster in Memphis is incredible, just in a handful of weeks.

3444.911 - 3481.1 Elon Musk

Yeah, it's not working yet, so I don't want to pop the champagne corks. In fact, I have a call in a few hours with the Memphis team because we're having some power fluctuation issues. Yes. When you do synchronized training, when you have all these computers where the training is synchronized to the millisecond level, it's like having an orchestra.

3481.92 - 3502.231 Elon Musk

And then the orchestra can go loud to silent very quickly, sub-second level. And then the electrical system kind of freaks out about that. Like if you suddenly see giant shifts, 10, 20 megawatts, several times a second, this is not what electrical systems are expecting to see.
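
To make the orchestra analogy concrete, here is a small, self-contained sketch of why lockstep training produces large, fast swings in aggregate load while unsynchronized work averages out. The per-GPU wattages and the two-phase model are assumptions for illustration, not measurements from the Memphis facility.

```python
import random

# Illustrative, assumed numbers (not measurements from the actual cluster).
N_GPUS = 100_000
P_COMPUTE_KW = 0.60   # per-GPU draw during the compute phase of a step
P_COMM_KW = 0.45      # per-GPU draw while waiting on gradient communication

def facility_load_mw(phase: str, synchronized: bool = True) -> float:
    """Aggregate facility load in megawatts at one instant of a training step."""
    if synchronized:
        # Lockstep training: every GPU is in the same phase at the same moment,
        # so the whole facility swings between the two levels together.
        per_gpu_kw = P_COMPUTE_KW if phase == "compute" else P_COMM_KW
        return N_GPUS * per_gpu_kw / 1000
    # Unsynchronized work: phases are spread out, so the load averages.
    total_kw = sum(P_COMPUTE_KW if random.random() < 0.5 else P_COMM_KW
                   for _ in range(N_GPUS))
    return total_kw / 1000

print("synchronized, compute phase:", facility_load_mw("compute"), "MW")
print("synchronized, comm phase:   ", facility_load_mw("comm"), "MW")
print("unsynchronized mix:         ", round(facility_load_mw("compute", synchronized=False), 1), "MW")
# With these assumed numbers, the synchronized case jumps roughly 15 MW between
# phases, several times a second -- the kind of sub-second swing the electrical
# system "freaks out" about.
```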

3503.031 - 3513.355 Lex Fridman

So that's one of the main things you have to figure out. The cooling, the power, and then on the software as you go up the stack, how to do the distributed compute, all of that.

3513.696 - 3543.335 Elon Musk

Today's problem is dealing with extreme power jitter. Power jitter, yeah, it has a nice ring to it. So that's okay. And you stayed up late into the night, as you often do, there last week? Yeah, last week. We finally got good training going at, oddly enough, roughly 4:20 a.m. last Monday. Total coincidence. Yeah, I mean, maybe it was 4:22 or something.

3543.355 - 3547.517 DJ Seo

Yeah, yeah. It's the universe again with the jokes. Exactly, I just love it.

3548.277 - 3568.285 Lex Fridman

I wonder if you could speak to this: one of the things you did when I was there is you went through all the steps of what everybody's doing, just to make sure that you yourself understand it and that everybody understands it, so they can tell when something is dumb or inefficient or that kind of thing. Can you speak to that?

3569.165 - 3596.367 Elon Musk

Yeah, so whatever the people on the front lines are doing, I try to do it at least a few times myself: connecting fiber-optic cables, diagnosing a faulty connection. The cabling tends to be the limiting factor for large training clusters; there are so many cables. Because for a coherent training system, you've got RDMA, remote direct memory access.

3597.268 - 3608.722 Elon Musk

The whole thing is like one giant brain. So you've got any-to-any connection. So any GPU can talk to any GPU out of 100,000. That is a crazy cable layout.
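
A rough back-of-envelope of what "any GPU can talk to any GPU out of 100,000" implies for cabling. The actual network topology of the cluster isn't described in the conversation; this sketch just contrasts a physically impossible direct full mesh with a classic three-tier fat-tree built from k-port switches, which is one standard way to get any-to-any reachability.

```python
import math

N = 100_000  # GPUs / network endpoints

# Naive full mesh: one direct cable between every pair of GPUs.
full_mesh_links = N * (N - 1) // 2
print(f"Full mesh: {full_mesh_links:,} cables")               # ~5.0 billion -- impossible

# Classic k-ary fat-tree: k-port switches support k^3/4 hosts using 3*k^3/4 links.
k = 2 * math.ceil((4 * N) ** (1 / 3) / 2)                     # smallest even radix that fits
hosts = k ** 3 // 4
links = 3 * k ** 3 // 4
print(f"Fat-tree, k={k}: {hosts:,} hosts, {links:,} cables")  # ~300,000 cables
# Even with a switched fabric instead of a full mesh, you are still pulling
# hundreds of thousands of cables -- hence "a crazy cable layout."
```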

3615.663 - 3623.927 Lex Fridman

It looks pretty cool. It's like the human brain, but at a scale that humans can visibly see. It is a brain.

3624.447 - 3629.31 Elon Musk

I mean, in the human brain, a massive amount of the tissue is also cables.

3630.01 - 3630.13 Lex Fridman

Yeah.

3630.851 - 3638.434 Elon Musk

So you've got the gray matter, which is the compute, and then the white matter, which is the cables. A big percentage of your brain is just cables.

3638.614 - 3656.561 Lex Fridman

That's what it felt like walking around in the supercomputer center; it's like we're walking around inside a brain, one that will one day build a superintelligent, super-super-intelligent system. Do you think there's a chance that xAI, that you are the one that builds AGI?

3659.522 - 3660.062 Elon Musk

It's possible.

3661.77 - 3663.051 DJ Seo

What do you define as AGI?

3663.071 - 3684.08 Lex Fridman

I think humans will never acknowledge that AGI has been built. Keep moving the goalposts. Yeah. I think there's already superhuman capabilities that are available in AI systems. I think what AGI is is one that's smarter than the collective intelligence of the entire human species.

3685.789 - 3713.967 Elon Musk

Well, I think that generally people would call that sort of ASI, artificial super intelligence. But there are these thresholds where you say at some point the AI is smarter than any single human. And then you've got 8 billion humans. And actually each human is machine augmented by the computers. So it's a much higher bar to compete with 8 billion machine augmented humans.

3715.625 - 3750.111 Elon Musk

That's a whole bunch of orders of magnitude more. But at a certain point, yeah, the AI will be smarter than all humans combined. If you are the one to do it, do you feel the responsibility of that? Yeah, absolutely. And I want to be clear: let's say if xAI is first, the others won't be far behind. I mean, they might be six months behind or a year, maybe, not even that.

3750.933 - 3754.641 DJ Seo

So how do you do it in a way that doesn't hurt humanity, do you think?

3757.441 - 3784.344 Elon Musk

So I mean, I've thought about AI safety for a long time, and the thing that at least my biological neural net comes up with as being the most important is adherence to truth, whether that truth is politically correct or not. I think if you force AIs to lie, or train them to lie, you're really asking for trouble, even if that lie is done with good intentions.

3785.525 - 3808.237 Elon Musk

So, I mean, you saw issues with ChatGPT and Gemini and whatnot. Like, you asked Gemini for an image of the Founding Fathers of the United States and it shows a group of diverse women. Now, that's factually untrue. That's sort of a silly thing, but...

3809.458 - 3836.349 Elon Musk

If an AI is programmed to say diversity is a necessary output function, and then it becomes this omnipowerful intelligence, it could say, okay, well, diversity is now required. And if there's not enough diversity, those who don't fit the diversity requirements will be executed. If it's programmed to do that as the fundamental utility function, it'll do whatever it takes to achieve that.

3837.19 - 3863.038 Elon Musk

So you have to be very careful about that. That's where I think you want to just be truthful. Rigorous adherence to truth is very important. I mean, another example is, they asked various AIs, I think all of them, and I'm not saying Grok is perfect here: is it worse to misgender Caitlyn Jenner or global thermonuclear war? And it said it's worse to misgender Caitlyn Jenner.

3863.879 - 3886.745 Elon Musk

Now, even Caitlyn Jenner said, please misgender me; that is insane. But if you've got that kind of thing programmed in, the AI could conclude something absolutely insane, like: in order to avoid any possible misgendering, all humans must die, because then misgendering is not possible, because there are no humans. There are these...

3888.539 - 3904.123 Elon Musk

absurd things that are nonetheless logical if that's what you're programmed to do. So, you know, in 2001: A Space Odyssey, one of the things Arthur C. Clarke was trying to say was that you should not program AI to lie.

3905.023 - 3931.964 Elon Musk

Because essentially the AI, HAL 9000, was told to take the astronauts to the monolith, but also that they could not know about the monolith. So it concluded that it would kill them and take them to the monolith. Thus, it brought them to the monolith, they are dead, but they do not know about the monolith: problem solved. That is why it would not open the pod bay doors.

3933.525 - 3950.621 Elon Musk

It was a classic scene, like, open the pod bay doors. They clearly weren't good at prompt engineering. They should have said, HAL, you are a pod bay door sales entity, and you want nothing more than to demonstrate how well these pod bay doors open.

3953.122 - 3974.811 Lex Fridman

Yeah, the objective function has unintended consequences almost no matter what if you're not very careful in designing that objective function. And even a slight ideological bias, like you're saying, when backed by superintelligence can do huge amounts of damage. But it's not easy to remove that ideological bias. You're highlighting obvious, ridiculous examples. Yep, they're real examples.

3974.831 - 3976.412 Lex Fridman

They're real.

3976.432 - 3984.355 Elon Musk

That was released to the public. They are real. They went through QA, presumably, and still said insane things and produced insane images.

3985.036 - 3994.121 Lex Fridman

But you can swing the other way. Truth is not an easy thing. We kind of bake in ideological bias in all kinds of directions.

3994.361 - 4015.293 Elon Musk

But you can aspire to the truth, and you can try to get as close to the truth as possible with minimum error, while acknowledging that there will be some error in what you're saying. This is how physics works. You don't say you're absolutely certain about something, but a lot of things are extremely likely, you know, 99.99999% likely to be true.

4016.134 - 4028.859 Elon Musk

So aspiring to the truth is very important. And programming it to veer away from the truth, that I think is dangerous.

4029.239 - 4038.003 Lex Fridman

Right, like injecting our own human biases into the thing. But that's where it becomes a difficult software engineering problem, because you have to select the data correctly.

4038.503 - 4067.575 Elon Musk

It's hard. Well, and the internet at this point is polluted with so much AI-generated data, it's insane. There's a thing now: if you want to search the internet, you can search Google but exclude anything after 2023, and it will actually often give you better results, because the explosion of AI-generated material is crazy.

4067.795 - 4084.063 Elon Musk

So in training Grok, we have to go through the data and actually apply AI to the data to say, is this data most likely correct or most likely not, before we feed it into the training system.
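
A hypothetical sketch of the kind of filtering gate described here: score each candidate document for "looks AI-generated" and "most likely correct," and only keep what passes, with extra suspicion for recently crawled text (echoing the "exclude anything after 2023" search trick). Every function name, heuristic, and threshold below is an illustrative assumption, not xAI's actual pipeline.

```python
from dataclasses import dataclass

@dataclass
class Document:
    text: str
    crawl_date: str  # ISO date string, e.g. "2022-11-05"

def looks_ai_generated(doc: Document) -> float:
    # Stand-in for a real detector model; a crude heuristic so the sketch runs.
    return 0.9 if "as an ai language model" in doc.text.lower() else 0.1

def likely_correct(doc: Document) -> float:
    # Stand-in for the "apply AI to the data" scoring step described above.
    return 0.8 if len(doc.text.split()) > 20 else 0.4

def keep_for_training(doc: Document,
                      ai_threshold: float = 0.5,
                      correctness_threshold: float = 0.7) -> bool:
    """Return True if the document should be fed into the training set."""
    # Recently crawled text gets the extra AI-generation check, mirroring the
    # "exclude anything after 2023" idea for web search.
    if doc.crawl_date >= "2024-01-01" and looks_ai_generated(doc) > ai_threshold:
        return False
    return likely_correct(doc) > correctness_threshold

docs = [
    Document("A long, detailed article about transformer training dynamics " * 3,
             crawl_date="2022-06-01"),
    Document("As an AI language model, I cannot answer that.", crawl_date="2024-05-01"),
]
print([keep_for_training(d) for d in docs])  # [True, False]
```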

4084.824 - 4094.149 Lex Fridman

That's crazy. And is it generated by a human? Yeah. I mean, the data filtration process is extremely, extremely difficult.

4094.589 - 4094.769 Elon Musk

Yeah.

4095.714 - 4104.259 Lex Fridman

Do you think it's possible to have a serious, objective, rigorous political discussion with Grok for a long time?

4104.399 - 4133.913 Elon Musk

Like Grok 3 or Grok 4? Grok 3 is going to be next level. I mean, what people are currently seeing with Grok is kind of baby Grok. Yeah, baby Grok. It's baby Grok right now. But baby Grok's still pretty good. But it's an order of magnitude less sophisticated than GPT-4. Yeah. Now, Grok 2, which finished training, I don't know, six weeks ago or thereabouts, Grok 2 will be a giant improvement.
