George Hotz
But that obviously is not at all how Skyrim was actually created. It was created by a bunch of programmers in a room, right? So, like, you know, it struck me one day how just silly atheism is. Like, of course we were created by God.
It's the most obvious thing.
Yeah. And then like, I also just like, I like that notion. That notion gives me a lot of, I mean, I guess you can talk about what it gives a lot of religious people. It's kind of like, it just gives me comfort. It's like, you know what? If we mess it all up and we die out. Yeah.
You know, people will come up with, like, well, yeah, but, like, man, who created God?
I'm like, that's God's problem. You know? Like, I'm not going to think this is. You're asking me if God believes in God?
I mean, to be fair, if God didn't believe in God, he'd be as silly as the atheists here.
Is mid good or bad? Mid is bad. It's like mid, it's like.
I have not played Diablo 4.
All right.
I'm going to say World of Warcraft. And it's not that the game is such a great game. It's not. It's that I remember in 2005 when it came out, how it opened my mind to ideas. It opened my mind to this whole world we've created, right? And there's almost been nothing like it since 2005. Like, you can look at MMOs today, and I think they all have lower user bases than World of Warcraft.
Like, EVE Online's kind of cool. But to think that, like, everyone knows, you know, people are always, like, they look at the Apple headset, like... What do people want in this VR? Everyone knows what they want. I want Ready Player One. And like that. So I'm going to say World of Warcraft. And I'm hoping that games can get out of this whole mobile gaming dopamine pump thing.
Yeah, and I think it'll come back. I believe.
They exist in real life, too.
I wish it was that cool.
It's like middle of the curve. There's that intelligence curve. You have the dumb guy, the smart guy, and then the mid guy. Actually, being the mid guy is the worst. The smart guy is like, I put all my money in Bitcoin.
What I'm really excited about in games is like once we start getting intelligent AIs to interact with.
Like the NPCs in games have never been.
In like, yeah, in every way. Like when you're actually building a world, a world imbued with intelligence. Oh yeah. Right. And it's just hard. Like, you know, running World of Warcraft, you're limited by what you can run on a Pentium 4. How much intelligence can you run? How many flops did you have? Right.
But now when I'm running a game on a hundred-petaflop machine, that's five people. I'm trying to make this a thing. 20 petaflops of compute is one person of compute. I'm trying to make that a unit.
It's like a horsepower. What's a horsepower? It's how powerful a horse is. What's a person of compute?
You know what? I bought a Quest 2. I put it on and I can't believe the first thing they show me is a bunch of scrolling clouds and a Facebook login screen. You had the ability to bring me into a world. And what did you give me? A pop-up, right? And this is why you're not cool, Mark Zuckerberg. But you could be cool.
Just make sure on the Quest 3, you don't put me into clouds and a Facebook login screen. Bring me to a world.
I got to play that from the beginning. I played it for like an hour at a friend's house.
The mid guy is like, you can't put money in Bitcoin. It's not real money.
I'm going to go buy a Switch. I'm going to go today and buy a Switch.
Is it pass-through or cameras?
The Apple one, is that one pass-through or cameras?
Some point. Maybe not January.
Maybe that's my optimism. But Apple, I will buy it. I don't care if it's expensive and does nothing. I will buy it. I will support this future endeavor.
You know what? And this is another place we'll give some more respect to Mark Zuckerberg. The two companies that have endured through technology are Apple and Microsoft. And what do they make? Computers and business services.
All the memes, social ads, they all come and go.
But you want to endure, build hardware.
And that's why it's more important than ever that the AIs running on those systems are aligned with you. Oh, yeah. They're going to augment your entire world. Oh, yeah.
There's two directions the AI girlfriend company can take, right? There's like the highbrow, something like her, maybe something you kind of talk to. And this is, and then there's the lowbrow version of it where I want to set up a brothel in Times Square.
Yeah. It's not cheating if it's a robot. It's a VR experience.
No, I don't want to do that one or that one.
We'll see where the technology goes.
There's a lot to do in company number two. I'm just like, I'm talking about company number three now.
None of that tech exists yet. There's a lot to do in company number two. Company number two is going to be the great struggle of the next six years: how centralized is compute going to be? The less centralized compute is, the better of a chance we all have.
We have to. We have to, or they will just completely dominate us. I showed a picture on stream of a man in a chicken farm. You ever seen one of those factory farm chicken farms? Why does he dominate all the chickens? Why does he- Smarter. He's smarter, right? Some people on Twitch were like, he's bigger than the chickens. Yeah. And now here's a man in a cow farm. Right?
So it has nothing to do with their size and everything to do with their intelligence. And if one central organization has all the intelligence, you'll be the chickens and they'll be the chicken man. But if we all have the intelligence, we're all the chickens. We're not all the man, we're all the chickens.
And there's no chicken man.
He was having a good life, man.
I want to make sure it's good. I want to make sure that the thing that I deliver is not going to be like a Quest 2, which you buy and use twice. I mean, it's better than a Quest, which you bought and used less than once, statistically.
I think that we're going to get super scary memes once the AIs actually are superhuman.
For the longest time at Comma, I asked, why did I start a company? Why did I do this? What else was I going to do?
With Comma, it really started as an ego battle with Elon. I wanted to beat him. I saw a worthy adversary. Here's a worthy adversary who I can beat at self-driving cars. And I think we've kept pace, and I think he's kept ahead. I think that's what's ended up happening there. But I do think Comma is... I mean, Comma's profitable. Like... And when this DriveGPT stuff starts working, that's it.
There's no more like bugs in the loss function. Like right now we're using like a hand-coded simulator. There's no more bugs. This is going to be it. Like this is the run up to driving.
It's so, it's better than FSD and Autopilot in certain ways. It has a lot more to do with which feel you like. We lowered the price on the hardware to $1499. You know how hard it is to ship reliable consumer electronics that go on your windshield? We're doing more than most cell phone companies.
I know. I have an SMT line. I make all the boards in-house in San Diego.
Our head of openpilot is great at like, you know, okay, I want all the comma threes to be identical. Yeah. And yeah, I mean, you know, look, it's $1,499. 30-day money-back guarantee. It will blow your mind at what it can do. Is it hard to scale? You know what? There's kind of downsides to scaling it. People are always like, why don't you advertise?
I think it's worse than that. So Infinite Jest, it's introduced in the first 50 pages, is about a tape that once you watch it once, you only ever want to watch that tape. In fact, you want to watch the tape so much that someone says, okay, here's a hacksaw, cut off your pinky, and then I'll let you watch the tape again.
Our mission is to solve self-driving cars while delivering shippable intermediaries. Our mission has nothing to do with selling a million boxes. It's tawdry.
Only if I felt someone could accelerate that mission and wanted to keep it open source. And like, not just wanted to, I don't believe what anyone says. I believe incentives. If a company wanted to buy Comma, their incentives would have to be to keep it open source. But Comma doesn't stop at the cars. The cars are just the beginning. The device is a human head. The device has two eyes, two ears.
It breathes air. It has a mouth.
We sell comma bodies too. They're very rudimentary. But one of the problems that we're running into is that the comma three has about as much intelligence as a bee. If you want a human's worth of intelligence, you're going to need a tiny rack, not even a tiny box. You're going to need like a tiny rack, maybe even more.
You don't. And there's no way you can. You connect to it wirelessly. So you put your tiny box or your tiny rack in your house, and then you get your comma body, and your comma body runs the models on that. It's close, right? You don't have to go to some cloud, which is 30 milliseconds away. You go to a thing, which is 0.1 milliseconds away.
I mean, eventually, if you fast forward 20, 30 years, the mobile chips will get good enough to run these AIs. But fundamentally, it's not even a question of putting legs on a tiny box because how are you getting 1.5 kilowatts of power on that thing, right? So you need, they're very synergistic businesses. I also want to build all of Comma's training computers.
Comma builds training computers right now. We use commodity parts. I think I can do it cheaper. So we're going to build, TinyCorp is going to not just sell TinyBox. TinyBox is the consumer version, but I'll build training data centers too.
He went to work at OpenAI.
Oh man, like, you know, his streams are just a level of quality so far beyond mine. It's just, you know... yeah, he's good. He wants to teach you. Yeah. I want to show you that I'm smarter than you.
And he'll do it.
Yeah.
So we're actually going to build that, I think. But it's not going to be one static tape. I think the human brain is too complex to be stuck in one static tape like that. If you look at like ant brains, maybe they can be stuck on a static tape. But we're going to build that using generative models. We're going to build the TikTok that you actually can't look away from.
MicroGrad was, yeah, inspiration for TinyGrad.
The whole, I mean, his CS231n was, this was the inspiration. This is what I just took and ran with and ended up writing this.
So, you know.
Don't go work for Darth Vader, man.
I know they are. And that's kind of what's even like more. And you know what? It's not that OpenAI doesn't open source the weights of GPT-4. It's that they go in front of Congress. And that is what upsets me. You know, we had two effective altruist Sams go in front of Congress. One's in jail.
One's in jail.
No, I think effective altruism is a terribly evil ideology.
Because you get Sam Bankman-Fried. Like, Sam Bankman-Fried is the embodiment of effective altruism. Utilitarianism is an abhorrent ideology. Like, well, yeah, we're going to kill those three people to save a thousand, of course. Yeah. Right? There's no underlying, like, there's just, yeah.
Oh, well, I think charity is bad, right? So what is charity but investment that you don't expect to have a return on, right?
And probably almost always that involves starting a company.
Yeah. If you just take the money and you spend it on malaria nets, you know, okay, great. You've made 100 malaria nets. But if you teach... Yeah.
I like the flip side of effective altruism, effective accelerationism. I think accelerationism is the only thing that's ever lifted people out of poverty. The fact that food is cheap. Not we're giving food away because we are kind-hearted people. No, food is cheap. And that's the world you want to live in. UBI, what a scary idea. What a scary idea. All your power now? Your money is power?
Your only source of power is granted to you by the goodwill of the government? What a scary idea.
I'd rather die than need UBI to survive, and I mean it.
You can make survival guaranteed without UBI. What you have to do is make housing and food dirt cheap. And that's the good world. And actually, let's go into what we should really be making dirt cheap, which is energy. That energy that, you know, oh my God, like, you know, that's, if there's one, I'm pretty centrist politically. If there's one political position I cannot stand, it's deceleration.
It's people who believe we should use less energy.
Yeah.
Not people who believe global warming is a problem. I agree with you. Not people who believe that, you know, saving the environment is good. I agree with you. But people who think we should use less energy, that energy usage is a moral bad. No. Yeah. No, you are asking, you are diminishing humanity.
How do we make more of it? How do we make it clean? And how do we make, just, just, just, how do I pay, you know, 20 cents for a megawatt hour instead of a kilowatt hour?
You know, we need to, I wish there were more, more Elons in the world. Yeah. I think Elon sees it as like, this is a political battle that needed to be fought.
And again, like, you know, I always ask the question of whenever I disagree with him, I remind myself that he's a billionaire and I'm not. So, you know, maybe he's got something figured out that I don't, or maybe he doesn't.
And it must be so hard. It must be so hard to meet people once you get to that point where.
See, I love not having shit. Like, I don't have shit, man. Trust me, there's nothing I can give you.
There's nothing worth taking from me, you know?
And all the hate too.
So the content is being generated by, let's say, one humanity worth of intelligence. And you can quantify a humanity, right? That's a... You know, it's... exaflops, yottaflops, but you can quantify it. Once that generation is being done by 100 humanities, you're done.
And it keeps this absolutely fake PSYOP political divide alive so that the 1% can keep power.
No, no. No, I'm not that methodical.
I think that there comes to a point where if it's no longer visceral, I just can't enjoy it. I still viscerally love programming.
I mean, just my computer in general. I mean, you know, I tell my girlfriend, my first love is my computer, of course. Like, you know, I sleep with my computer. It's there for a lot of my sexual experiences. Like, come on, so is everyone's, right? Like, you know, you gotta be real about that.
The fact that, yeah, I mean, it's, you know, I wish it was, and someday they'll be smarter and someday, you know, maybe I'm weird for this, but I don't discriminate, man. I'm not going to discriminate biostack life and silicon stack life. Like,
No, you see, no, no, no. But VS Code is, no, they're just doing that. Microsoft's doing that to try to get me hooked on it. I'll see through it.
I'll see through it. It's gold digger, man. It's gold digger.
Well, this just gets more interesting, right?
Oh, absolutely. No, no, no. Look, I think Microsoft, again, I wouldn't count on it to be true forever, but I think right now Microsoft is doing the best work in the programming world. Like between GitHub, GitHub Actions, VS Code, the improvements to Python, where's Microsoft? Like...
Right? Right?
How things change.
That's who I bet on to replace Google, by the way.
Microsoft.
Satya Nadella said straight up, I'm coming for it.
I think we're a long way away from that. But I would not be surprised if in the next five years, Bing overtakes Google as a search engine.
Wouldn't surprise me.
Interesting.
It might be some startup too. I would equally bet on some startup.
To win.
Of course.
I don't know. I haven't figured out what the game is yet, but when I do, I want to win.
I think the game is to stand eye to eye with God.
I mean, this is what, like, I don't know. This is some, this is some, there's probably some ego trip of mine, you know? Like, you want to stand eye to eye with God. He's just blasphemous, man. Okay. I don't know. I don't know. I don't know if it would upset God. I think he, like, wants that. I mean, I certainly want that for my creations. I want my creations to stand eye to eye with me.
So why wouldn't God want me to stand eye to eye with him? That's the best I can do, golden rule.
I only watched season one of Westworld, but yeah, we got to find the maze and solve it.
I wrote a blog post. I reread Genesis and just looked like, they give you some clues at the end of Genesis for finding the Garden of Eden. And I'm interested. I'm interested.
Thank you. Great to be here.
Yeah.
I don't even know what it'll look like, right? Like again, you can't imagine the behaviors of something smarter than you, but a super intelligent, an agent that just dominates your intelligence so much will be able to completely manipulate you.
You see? And that's the whole AI safety thing. It's not the machine that's going to do that. It's other humans using the machine that are going to do that to you.
The machine is a machine. Yeah. But the human gets the machine. And there's a lot of humans out there very interested in manipulating you.
Yes, but maybe for a different reason.
Okay. Why didn't nuclear weapons kill everyone?
I think there's an answer. I think it's actually very hard to deploy nuclear weapons tactically, very hard to accomplish tactical objectives. Great, I can nuke their country. I have an irradiated pile of rubble. I don't want that.
Why don't I want an irradiated pile of rubble? Yeah. For all the reasons no one wants an irradiated pile of rubble.
Yeah, what you want, a total victory in a war is not usually the irradiation and eradication of the people there. It's the subjugation and domination of the people.
It's somewhat surprising, but you see, it's the little red button that's going to be pressed with AI that's going to, you know, and that's why we die. It's not because the AI, if there's anything in the nature of AI, it's just the nature of humanity.
Sure. So I think the most obvious way to me is wireheading. We end up amusing ourselves to death. We end up all staring at that infinite TikTok and forgetting to eat. Maybe it's even more benign than this. Maybe we all just stop reproducing. Now, to be fair, it's probably hard to get all of humanity.
I mean, diversity in humanity is... With due respect. I wish I was more weird. No, like I'm kind of, look, I'm drinking smart water, man. That's like a Coca-Cola product, right?
I went corporate. No, the amount of diversity in humanity I think is decreasing. Just like all the other biodiversity on the planet. Yeah. Right?
Go eat McDonald's in China.
Yeah. No, it's the interconnectedness that's doing it.
There is. In a bunker. To be fair, do I think AI kills us all? I think AI kills everything we call society today. I do not think it actually kills the human species. I think that's actually incredibly hard to do.
Yeah, but some of us do. And they'll be okay and they'll rebuild after the great AI.
Whoa, whoa, whoa. They're going to be religiously against that.
Sure. I mean, it'll be like, you know, some kind of Amish looking kind of thing, I think. I think they're going to have very strong taboos against technology.
What's interesting about everything we build, I think we're going to build super intelligence before we build any sort of robustness in the AI. We cannot build an AI that is capable of going out into nature and surviving like a bird, right? A bird is an incredibly robust organism. We've built nothing like this. We haven't built a machine that's capable of reproducing.
Let's just focus on them reproducing, right? Do they have microchips in them? Okay. Then do they include a fab?
Then how are they going to reproduce?
Yeah, but then you're really moving away from robustness. Yes. All of life is capable of reproducing without needing to go to a repair shop. Life will continue to reproduce in the complete absence of civilization. Robots will not. So if the AI apocalypse happens...
I mean, the AIs are going to probably die out because I think we're going to get, again, super intelligence long before we get robustness.
Well, that'd be very interesting. I'm interested in building that.
Very, very hard.
And then they remember that you're going to have to have a fab.
Why is that hard? Well, because it's not, I mean, a 3D printer is a very simple machine, right? Okay, you're going to print chips? You're going to have an atomic printer? How are you going to dope the silicon?
Yeah. Right?
How are you going to etch the silicon?
Yeah, but structural type of robots aren't going to have the intelligence required to survive in any complex environment.
I don't think this works. I mean, again, like ants at their very core are made up of cells that are capable of individually reproducing. They're doing quite a lot of computation that we're taking for granted. It's not even just the computation. It's that reproduction is so inherent. Okay, so like there's two stacks of life in the world. There's the biological stack and the silicon stack.
The biological stack starts with reproduction. Reproduction is at the absolute core. The first proto-RNA organisms were capable of reproducing. The silicon stack, despite as far as it's come, is nowhere near being able to reproduce.
Yeah.
Even if you did put a fab on the machine, right? Let's say, okay, you know, we can build fabs. We know how to do that as humanity. We can probably put all the precursors that build all the machines and the fabs also in the machine. So first off, this machine is going to be absolutely massive.
I mean, we almost have a, like, think of the size of the thing required to reproduce a machine today, right? Like, is our civilization capable of reproduction? Can we reproduce our civilization on Mars?
I believe that Twitter can be run by 50 people. I think that this is going to take most of, like, it's just most of society, right? Like we live in one globalized world.
Oh, okay. You're talking about, yeah, okay. So you're talking about the humans reproducing and like basically like what's the smallest self-sustaining colony of humans?
Yeah, okay, fine. But they're not going to be making five nanometer chips.
Maybe. Or maybe they'll watch our colony die out over here and be like, we're not making chips.
Don't make chips.
Whatever you do, don't make chips. Chips are what led to their downfall.
Do you need that asshole? That's the question, right? Humanity works really hard today to get rid of that asshole, but I think they might be important.
I like to think it's just like another stack for life. Like we have like the biostack life, like we're a biostack life and then the silicon stack life.
Oh, no, we don't know what the ceiling is for the biostack either. The biostack just seemed to move slower. You have Moore's Law, which is not dead despite many proclamations.
And you don't have anything like this in the biostack. So I have a meme that I posted. I tried to make a meme. It didn't work too well. But I posted a picture of Ronald Reagan and Joe Biden. And you look, this is 1980 and this is 2020. And these two humans are basically like the same. There's been no change in humans in the last 40 years.
And then I posted a computer from 1980 and a computer from 2020. Wow.
Oh, yeah.
Yeah.
I've been ready for a long time.
I love it.
Yeah.
Judging from what you can buy today, far. Very far.
I mean, the headsets just are not quite at eye resolution yet. I haven't put on any headset where I'm like, oh, this could be the real world. Whereas when I put good headphones on, audio is there. We can reproduce audio that I'm like, I'm actually in a jungle right now. If I close my eyes, I can't tell I'm not.
Or humans want to believe.
Humans want to believe so much that people think the large language models are conscious. That's how much humans want to believe.
I don't think I'm conscious.
It's like what it seems to mean to people. It's just like a word that atheists use for souls.
If consciousness is a spectrum, I'm definitely way more conscious than the large language models are. I think the large language models are less conscious than a chicken.
In Miami, like a couple months ago.
There's living chickens walking around Miami. It's crazy.
Yeah.
A chicken, yeah.
Humans want to believe so much that if I took a rock and a Sharpie and drew a sad face on the rock, they'd think the rock is sad.
No.
Yeah, I mean, it's interesting that like human systems seem to claim that they're conscious. And I guess it kind of like says something in a straight up like, okay, what do people mean when, even if you don't believe in consciousness, what do people mean when they say consciousness? And there's definitely like meanings to it.
Pizza.
I like cheese pizza.
No, I don't like pineapple.
If they put any ham on it, oh, that's real bad.
Oh, that's my favorite.
If that's the word you want to use to describe it, sure. I'm not going to deny that that feeling exists. I'm not going to deny that I experienced that feeling. When, I guess what I kind of take issue to is that there's some like, like, how does it feel to be a web server? Do 404s hurt? Not yet. How would you know what suffering looked like?
Sure, you can recognize a suffering dog because we're the same stack as the dog. All the biostack stuff kind of, especially mammals, you know, it's really easy. Game recognizes game. Yeah. Versus the silicon stack stuff, it's like, you have no idea. You have, wow, the little thing has learned to mimic, you know. But then I realized that that's all we are too.
Oh, look, the little thing has learned to mimic.
The definition of consciousness is how close something looks to human. Sure, I'll give you that one.
Sure. It's a very anthropocentric definition, but... Well, that's all we got. Sure. No, and I don't mean to like... I think there's a lot of value in it. Look, I just started my second company. My third company will be AI Girlfriends.
Yeah, but okay, so here's where it actually gets totally different, right? When you interact with another human, you can make some assumptions, right? When you interact with these models, you can't. You can make some assumptions that that other human experiences suffering and pleasure in a pretty similar way to you do. The golden rule applies. With an AI model, this isn't really true.
These large language models are good at fooling people because they were trained on a whole bunch of human data and told to mimic it.
Yeah.
Yeah.
I made some chatbots. I gave them backstories. It was lots of fun. I was so happy when Llama came out.
To be fair, like, you know, something that people generally look for when they're looking for someone to date is intelligence in some form. And the rock doesn't really have intelligence. Only a pretty desperate person would date a rock. I think we're all desperate deep down. Oh, not rock level desperate.
Oh, I agree. And you know what? I won't even say this so cynically. I will actually say this in a way that like, I want AI friends. I do. Yeah. Like I would love to, you know, again, the language models now are still a little, like people are impressed with these GPT things. And I look at like, or like, or the co-pilot, the coding one. And I'm like, okay, this is like junior engineer level.
And these people are like Fiverr level artists and copywriters. Like, okay, great. We got like Fiverr and like junior engineers. Okay, cool. Like, and this is just the start and it will get better, right? Like I can't wait to have AI friends who are more intelligent than I am.
That's up to you and your human partner to define.
Yeah, you have to have that conversation, I guess.
No, I mean, it's similar kind of to porn.
Yeah. I think people in relationships have different views on that.
The porn one is a good branching off point. Like these things, you know, one of my scenarios that I put in my chat bot is I, you know, a nice girl named Lexi. She's 20. She just moved out to LA. She wanted to be an actress, but she started doing OnlyFans instead. And you're on a date with her. Enjoy. Yeah.
I mean, these are all things for people to define in their relationships. What it means to be human is just gonna start to get weird.
Do you know about shadow banning?
Shadow banning, okay, you post, no one can see it. Heaven banning, you post, no one can see it, but a whole lot of AIs are spun up to interact with you.
There's a great... It's called My Little Pony Friendship is Optimal. It's a sci-fi story that explores this idea.
Friendship is optimal.
I want it. Look, I want it. If no one else wants it, I want it.
And I'll feel their loneliness and, you know, it just will only advertise to you some of the time.
This interesting path from rationality to polyamory. Yeah, that doesn't make sense for me.
The crazy thing is, like, culture is whatever we define it as, right? It's the is-ought problem in moral philosophy, right? The "is" might be that, like, computers are capable of mimicking, you know, girlfriends perfectly. They pass the girlfriend Turing test, right? But that doesn't say anything about ought.
That doesn't say anything about how we ought to respond to them as a civilization. That doesn't say we ought to get rid of monogamy, right? That's a completely separate question, really a religious one.
No, I mean, of course, my AI girlfriends, their goal is to pass the Girlfriend Turing Test.
Yeah, I mean, you know, look, we're a company. We don't have to get everybody. We just have to get a large enough clientele to stay with us.
All right.
I started TinyGrad as like a toy project just to teach myself, okay, like, what is a convolution? What are all these options you can pass to them? What is the derivative of a convolution, right? Very similar to Karpathy's micrograd. And then I started realizing, I started thinking about, like, AI chips. I started thinking about chips that run...
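The "what is the derivative of a convolution" exercise can be sketched in a few lines of plain Python. This is a toy illustration in the spirit of micrograd, not tinygrad's actual code: a 1-D convolution written as explicit loops, its gradient with respect to the filter, and a finite-difference check that the gradient is right.

```python
# Toy sketch (hypothetical names, not tinygrad's API): a 1-D "valid"
# convolution (really cross-correlation, as in deep learning) and its
# derivative with respect to the filter weights.

def conv1d(x, w):
    # slide the filter over the input; output length is n - k + 1
    n, k = len(x), len(w)
    return [sum(x[i + j] * w[j] for j in range(k)) for i in range(n - k + 1)]

def conv1d_grad_w(x, w, dout):
    # the gradient w.r.t. the filter is itself a convolution:
    # dL/dw[j] = sum_i dout[i] * x[i + j]
    n, k = len(x), len(w)
    return [sum(dout[i] * x[i + j] for i in range(n - k + 1)) for j in range(k)]

x = [1.0, 2.0, 3.0, 4.0, 5.0]
w = [0.5, -1.0, 0.25]
out = conv1d(x, w)

# take loss = sum of outputs, so the upstream gradient is all ones
dout = [1.0] * len(out)
dw = conv1d_grad_w(x, w, dout)

# finite-difference check on w[0]: perturb, recompute, compare slopes
eps = 1e-6
w2 = [w[0] + eps] + w[1:]
num = (sum(conv1d(x, w2)) - sum(out)) / eps
print(dw[0], num)  # analytic and numeric gradients should agree
```

In a real autograd framework the same gradient falls out of the machinery automatically; writing it by hand once is what makes the question concrete.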
And I was like, well, okay, this is going to be a really big problem. If NVIDIA becomes a monopoly here, how long before NVIDIA is nationalized?
Yeah.
If NVIDIA becomes just like 10X better than everything else, you're giving a big advantage to somebody who can secure NVIDIA as a resource. Yeah. In fact, if Jensen watches this podcast, he may want to consider this. He may want to consider making sure his company is not nationalized.
Oh, yes.
So we have NVIDIA and AMD. Great.
Have you seen it? Google loves to rent you TPUs.
So I started work on a, uh, I was like, okay, what's it going to take to make a chip? And my first notions were all completely wrong about why, about like how you could improve on GPUs. And I will take this, this is from Jim Keller on your podcast. And this is one of my absolute favorite descriptions of computation.
So there's three kinds of computation paradigms that are common in the world today. There's CPUs, and CPUs can do everything. CPUs can do add and multiply, they can do load and store, and they can do compare and branch. And when I say they can do these things, they can do them all fast, right?
So compare and branch are unique to CPUs, and what I mean by they can do them fast is they can do things like branch prediction and speculative execution, and they spend tons of transistors on these super deep reorder buffers in order to make these things fast. Then you have a simpler computation model, GPUs. GPUs can't really do compare and branch. I mean, they can, but it's horrendously slow.
But GPUs can do arbitrary load and store. GPUs can do things like X, dereference Y. So they can fetch from arbitrary pieces of memory. They can fetch from memory that is defined by the contents of the data. The third model of computation is DSPs. And DSPs are just add and multiply. They can do load and stores, but only static load and stores.
Only loads and stores that are known before the program runs. And you look at neural networks today, and 95% of neural networks are all the DSP paradigm. They are just statically scheduled adds and multiplies. So TinyGrad really took this idea, and I'm still working on it, to extend this as far as possible. Every stage of the stack has Turing completeness.
All right, Python has Turing completeness, and then we take Python, we go into C++, which is Turing complete, and maybe C++ calls into some CUDA kernels, which are Turing complete. The CUDA kernels go through LLVM, which is Turing complete, into PTX, which is Turing complete, to SASS, which is Turing complete, on a Turing complete processor. I wanna get Turing completeness out of the stack entirely.
Because once you get rid of Turing completeness, you can reason about things. Rice's theorem and the halting problem do not apply to add-mul machines.
Every layer of the stack. Every layer. Every layer of the stack, removing Turing completeness allows you to reason about things, right? So the reason you need to do branch prediction in a CPU, and the reason it's prediction, and branch predictors are, I think, like 99% accurate on CPUs. Why do they get 1% of them wrong? Well, they get 1% wrong because you can't know. Right?
That's the halting problem. It's equivalent to the halting problem to say whether a branch is going to be taken or not. I can show that. But the add-mul machine, the neural network, runs the identical compute every time. The only thing that changes is the data. So when you realize this, you think about, okay, how can we build a computer?
How can we build a stack that takes maximal advantage of this idea? So what makes TinyGrad different from other neural network libraries is it does not have a primitive operator even for matrix multiplication. And every single other one does. They even have primitive operations for things like convolutions.
No matmul. Well, here's what a matmul is. So I'll use my hands to talk here. So if you think about a cube, and I put my two matrices that I'm multiplying on two faces of the cube, right? You can think about the matrix multiply as, okay, I'm going to do n cubed multiplies, one for each cell in the cube. And then I'm going to do a sum, which is a reduce, up to here, to the third face of the cube.
And that's your multiplied matrix. So what a matrix multiply is, is a bunch of shape operations, right? A bunch of permutes, reshapes, and expands on the two matrices. A multiply, n cubed. A reduce, n cubed, which gives you an n squared matrix.
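The cube picture above can be sketched in a few lines of numpy (my illustration, not tinygrad's actual IR): movement ops build the n-by-n-by-n cube via broadcasting, then a pointwise multiply and a sum-reduce recover the matmul.

```python
import numpy as np

# Matmul with no matmul primitive: only movement ops (reshape + broadcast
# expand), a pointwise multiply over the n^3 cube, and a reduce.
n = 3
A = np.arange(n * n, dtype=np.float32).reshape(n, n)
B = np.arange(n * n, dtype=np.float32).reshape(n, n)

# Movement ops: view A as (n, n, 1) and B as (1, n, n); broadcasting
# "expands" both to the full (n, n, n) cube without copying data.
cube = A.reshape(n, n, 1) * B.reshape(1, n, n)   # n^3 multiplies

C = cube.sum(axis=1)                             # reduce: n^3 -> n^2

print(C)
assert np.allclose(C, A @ B)                     # same result as a matmul
```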
So TinyGrad has about 20. And you can compare TinyGrad's op set or IR to things like XLA or PrimTorch. So XLA and PrimTorch are ideas where, like, okay, Torch has like 2,000 different kernels. PyTorch 2.0 introduced PrimTorch, which has only 250. TinyGrad has order of magnitude 25. It's 10x less than XLA or PrimTorch. And you can think about it as kind of like RISC versus CISC, right?
These other things are CISC-like systems. TinyGrad is RISC.
RISC architecture is going to change everything. Hackers, 1995.
Angelina Jolie delivers the line, "RISC architecture is going to change everything," in 1995. Wow. And here we are with ARM in the phones. And ARM everywhere.
Sure. Okay, so you have unary ops, which take in a tensor and return a tensor of the same size and do some unary op to it. Exp, log, reciprocal, sin, right? They take in one and they're pointwise.
Yeah, ReLU. Almost all activation functions are unary ops. A combination of unary ops together is still a unary op. Then you have binary ops. Binary ops are like pointwise addition, multiplication, division, compare. It takes in two tensors of equal size and outputs one tensor. Then you have reduce ops.
Reduce ops will take a three-dimensional tensor and turn it into a two-dimensional tensor, or a three-dimensional tensor and turn it into a zero-dimensional tensor. Think like a sum or a max are really the common ones there. And then the fourth type is movement ops. And movement ops are different from the other types because they don't actually require computation.
They require different ways to look at memory. So that includes reshapes, permutes, expands, flips. Those are the main ones, probably.
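The four op classes just listed can be sketched with numpy (my illustration; tinygrad's real op set differs in detail), including the key property that a movement op is just a view of the same memory:

```python
import numpy as np

# The four op classes: unary, binary, reduce, movement.
x = np.array([[1.0, -2.0], [3.0, -4.0]])
y = np.ones((2, 2))

unary   = np.maximum(x, 0.0)   # UnaryOp: ReLU -- same shape in and out
binary  = x + y                # BinaryOp: pointwise, two equal-size inputs
reduced = x.sum(axis=1)        # ReduceOp: (2, 2) -> (2,)
moved   = x.T                  # MovementOp: permute -- no computation,
                               # just a different way to look at memory

print(unary.tolist())          # [[1.0, 0.0], [3.0, 0.0]]
print(reduced.tolist())        # [-1.0, -1.0]
print(np.shares_memory(moved, x))  # True -- the permute copied nothing
```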
And convolutions. And every convolution you can imagine, dilated convolutions, strided convolutions, transposed convolutions.
Sure. So if you type in PyTorch A times B plus C, what this is going to do is it's going to first multiply A and B and store that result into memory. And then it is going to add C by reading that result from memory, reading C from memory, and writing that out to memory. There are way more loads and stores to memory than you need there.
If you don't actually do A times B as soon as you see it, if you wait until the user actually realizes that tensor, until the laziness actually resolves, you can fuse that plus C. It's the same way Haskell works.
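A toy sketch of that laziness (my illustration, far simpler than tinygrad's actual lazy buffers): writing `a * b + c` only records an expression graph, and the whole tree becomes available for fusion when the result is finally realized.

```python
# Toy lazy tensor: nothing executes when ops are applied; the expression
# is recorded, and mul+add can be fused into one pass at realize time.

class Lazy:
    def __init__(self, op, srcs=(), data=None):
        self.op, self.srcs, self.data = op, srcs, data
    def __mul__(self, other): return Lazy("mul", (self, other))
    def __add__(self, other): return Lazy("add", (self, other))
    def realize(self):
        if self.op == "load": return self.data
        a, b = (s.realize() for s in self.srcs)
        # a real backend sees the whole tree here and can emit ONE fused
        # kernel, instead of writing the mul result out and reading it back
        if self.op == "mul": return [x * y for x, y in zip(a, b)]
        return [x + y for x, y in zip(a, b)]

a = Lazy("load", data=[1.0, 2.0])
b = Lazy("load", data=[3.0, 4.0])
c = Lazy("load", data=[10.0, 10.0])
out = a * b + c          # builds a graph; no arithmetic has happened yet
print(out.op)            # add
print(out.realize())     # [13.0, 18.0]
```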
So TinyGrad's front end looks very similar to PyTorch. I probably could make a perfect or pretty close to perfect interop layer if I really wanted to. I think that there's some things that are nicer about TinyGrad syntax than PyTorch, but the front end looks very Torch-like. You can also load in ONNX models. We have more ONNX tests passing than Core ML.
Okay, so... We'll pass ONNX Runtime soon.
By the way, I really like PyTorch. I think that it's actually a very good piece of software. I think that they've made a few different trade-offs, and these different trade-offs are where TinyGrad takes a different path. One of the biggest differences is it's really easy to see the kernels that are actually being sent to the GPU.
If you run PyTorch on the GPU, you do some operation and you don't know what kernels ran. You don't know how many kernels ran. You don't know how many FLOPS were used. You don't know how many memory accesses were used. In TinyGrad, type DEBUG=2, and it will show you, in this beautiful style, every kernel that's run, how many FLOPS, and how many bytes.
TinyGrad solves the problem of porting new ML accelerators quickly. One of the reasons, tons of these companies now, I think Sequoia marked Graphcore to zero, right? Cerebras, Tenstorrent, Groq. All of these ML accelerator companies, they built chips. The chips were good. The software was terrible. And part of the reason, I think the same problem is happening with Dojo.
It's really, really hard to write a PyTorch port because you have to write 250 kernels and you have to tune them all for performance.
Look, my prediction for Tenstorrent is that they're going to pivot to making RISC-V chips. CPUs. CPUs.
Because AI accelerators are a software problem, not really a hardware problem.
I think what's going to happen is if I can finish... Okay. If you're trying to make an AI accelerator... You better have the capability of writing a torch-level performance stack on NVIDIA GPUs.
If you can't write a torch stack on NVIDIA GPUs, and I mean all the way, I mean down to the driver, there's no way you're going to be able to write it on your chip, because your chip's worse than an NVIDIA GPU. The first version of the chip you tape out, it's definitely worse.
Yes. And not only that, actually, the chip that you tape out, almost always because you're trying to get advantage over NVIDIA, you're specializing the hardware more. It's always harder to write software for more specialized hardware. Like a GPU is pretty generic. And if you can't write an NVIDIA stack, there's no way you can write a stack for your chip.
So my approach with TinyGrad is first, write a performant NVIDIA stack. We're targeting AMD.
With love.
It's like the Yankees, you know? I'm a Mets fan.
Well, let's start with the fact that the 7900 XTX kernel drivers don't work. And if you run demo apps in loops, it panics the kernel.
Lisa Su responded to my email.
Oh. I reached out. I was like, this is, you know, really? Like, I understand if your 7x7 transposed Winograd conv is slower than NVIDIA's, but literally when I run demo apps in a loop, the kernel panics.
I just literally took their demo apps and wrote `while true; do the_app; done` in a bunch of screens. This is like the most primitive fuzz testing.
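That "most primitive fuzz testing" looks roughly like this (my reconstruction; `demo_app` is a stand-in name, and the loop is bounded here so the sketch terminates, where the original ran forever in several `screen` sessions):

```shell
# Crash-loop fuzzing: run a GPU demo app over and over and wait for the
# kernel driver to panic. APP defaults to `true` as a harmless stand-in
# for the vendor demo binary.
APP="${APP:-true}"
for i in $(seq 1 100); do
  "$APP" >/dev/null 2>&1 || { echo "run $i failed"; break; }
done
echo "completed"
```

A driver bug of the kind described shows up not as the app failing, but as the whole machine going down mid-loop.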
They're changing. They're trying to change. They're trying to change. And I had a pretty positive interaction with them this week. Last week, I went on YouTube. I was just like, that's it. I give up on AMD. Like, this is their driver. I'm not going to, you know, I'll go with Intel GPUs. Intel GPUs have better drivers.
Yeah, and I'd like to extend that diversification to everything. My central thesis about the world is there are things that centralize power, and they're bad, and there are things that decentralize power, and they're good. Everything I can do to help decentralize power, I'd like to do.
I'd like to help them with software. No, actually, the only ASIC that is remotely successful is Google's TPU. And the only reason that's successful is because Google wrote a machine learning framework. I think that you have to write a competitive machine learning framework in order to be able to build an ASIC.
They have one. They have an internal one.
I don't want a cloud.
I don't like cloud.
Fundamental limitation of cloud is who owns the off switch.
Yeah.
Well, you shouldn't build one. You should buy a box from the Tiny Corp.
It's called the tiny box.
It's $15,000. And it's almost a petaflop of compute. It's over 100 gigabytes of GPU RAM. It's over five terabytes per second of GPU memory bandwidth. I'm going to put like four NVMe drives in RAID. You're going to get like 20, 30 gigabytes per second of drive read bandwidth. I'm going to build like the best deep learning box that I can that plugs into one wall outlet.
Yeah. So it's almost a petaflop of compute.
Today, I'm leaning toward AMD. Okay. But we're pretty agnostic to the type of compute. The main limiting spec is a 120-volt, 15-amp circuit.
Okay.
Well, I mean it, because, like, there's a plug over there, right? You have to be able to plug it in. We're also going to sell the tiny rack, which is like, what's the most power you can get into your house without arousing suspicion? And one of the answers is an electric car charger.
A wall outlet is about 1,500 watts. A car charger is about 10,000 watts. Is that it?
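The arithmetic behind those two numbers, assuming US residential wiring (my figures, not the podcast's; the 240 V / 40 A charger rating is a typical level-2 example):

```python
# Power budget of a standard US wall outlet vs an EV charger circuit.
volts, amps = 120, 15
outlet_peak = volts * amps            # 1800 W on a 15 A branch circuit
outlet_cont = outlet_peak * 0.8       # ~80% continuous-load derating
print(outlet_peak, round(outlet_cont))  # 1800 1440 -> "about 1,500 watts"

charger = 240 * 40                    # a typical level-2 EV charger circuit
print(charger)                        # 9600 -> "about 10,000 watts"
```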
Again, probably 7900 XTXs, but maybe 3090s, maybe A770s.
I'm still exploring. I want to deliver a really good experience to people. And yeah, what GPUs I end up going with, again, I'm leaning toward AMD. We'll see. You know, in my email, what I said to AMD is like, just dumping the code on GitHub is not open source. Open source is a culture. Open source means that your issues are not all one-year-old stale issues. Open source means developing in public.
And if you guys can commit to that, I see a real future for AMD as a competitor to NVIDIA.
We're taking pre-orders. I took this from Elon. I'm like $100 fully refundable pre-orders.
No, I'll try to do it faster. It's a lot simpler. It's a lot simpler than a truck.
The thing that I want to deliver to people out of the box is being able to run 65-billion-parameter LLaMA in FP16 in real time. At like a good, like, 10 tokens per second or five tokens per second or something.
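A back-of-envelope check on that tokens-per-second target (my arithmetic, not the podcast's): token generation is memory-bandwidth bound, and each token must stream all the weights through the GPUs once, so the box's stated ~5 TB/s of memory bandwidth caps the rate.

```python
# Rough ceiling on LLaMA-65B FP16 generation speed from memory bandwidth.
params = 65e9
bytes_per_param = 2                     # FP16
model_bytes = params * bytes_per_param  # 130 GB of weights per token pass
bandwidth = 5e12                        # ~5 TB/s aggregate, per the specs above

tokens_per_sec = bandwidth / model_bytes
print(round(tokens_per_sec, 1))  # ~38.5 -- a theoretical upper bound; real
                                 # systems land well below this ceiling
```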
Yeah, or I think Falcon is the new one. Experience a chat with the largest language model that you can have in your house.
From a wall plug, yeah. Actually, for inference, even more power wouldn't get you more. Well, no, the biggest model released is 65-billion-parameter LLaMA, as far as I know.
That one's harder, actually.
The boyfriend's harder, yeah.
Because women are attracted to status and power and men are attracted to youth and beauty. No, I mean, that's what I mean.
No, machines do not have any status or real power.
But status fundamentally is a zero-sum game, whereas youth and beauty are not.
I just think that that's why it's harder. You know, yeah, maybe it is my biases. I think status is way easier to fake. I also think that, you know, men are probably more desperate and more likely to buy my product. So maybe they're a better target market.
Yeah. Look, I mean, look, I know you can look at porn viewership numbers, right? A lot more men watch porn than women. Yeah. You can ask why that is.
Oh, man. And I'll tell you why it's six. Yeah. So AMD EPYC processors have 128 lanes of PCIe. I want to leave enough lanes for some drives, and I want to leave enough lanes for some networking.
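The lane budget implied by that answer (my arithmetic; the six-GPU, x16-per-GPU split is the obvious reading of "why it's six"):

```python
# Why six GPUs: PCIe lane budget on an AMD EPYC host.
total_lanes = 128              # EPYC exposes 128 PCIe lanes
lanes_per_gpu = 16             # full x16 link per GPU
gpus = 6
used = gpus * lanes_per_gpu    # 96 lanes for GPUs
left = total_lanes - used      # 32 lanes left for NVMe drives and networking
print(used, left)              # 96 32
```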
Ah, that's one of the big challenges. Not only do I want the cooling to be good, I want it to be quiet. I want the tiny box to be able to sit comfortably in your room.
I'll give a more, I mean, I can talk about how it relates to company number one.
No, no, quiet because you want to put this thing in your house and you want it to coexist with you. If it's screaming at 60 dB, you don't want that in your house. You'll kick it out.
Yeah, I want like 40, 45.
A key trick is to actually make it big. Ironically, it's called the tiny box. But if I can make it big, a lot of that noise is generated because of high-pressure air. If you look at like a 1U server, a 1U server has these super high-pressure fans. They're super deep and they scream. Versus if you have something that's big, well, I can use a big, you know, they call them Big Ass Fans.
Those ones that are like huge on the ceiling and they're completely silent.
It is the... I do not want it to be large according to UPS. I want it to be shippable as a normal package, but that's my constraint there.
No, it has to be... Well, you're... Look, I want to give you a great out-of-the-box experience. I want you to lift this thing out. I want it to be like the Mac, you know? TinyBox.
Yeah. We did a poll for whether people want Ubuntu or Arch. We're going to stick with Ubuntu.
There's a really simple way to get these models into TinyGrad: you can just export them as ONNX, and then TinyGrad can run ONNX. So the ports that I did of LLaMA, Stable Diffusion, and now Whisper are more academic, to teach me about the models, but they are cleaner than the PyTorch versions. You can read the code. I think the code is easier to read. It's less lines.
There's just a few things about the way TinyGrad writes things. Here's a complaint I have about PyTorch. nn.ReLU is a class, right? So when you create an nn module, you'll put your nn.ReLUs in your init. And this makes no sense. ReLU is completely stateless. Why should that be a class?
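The complaint, sketched (my minimal stand-ins, not actual torch or tinygrad code): ReLU carries no parameters, so a plain function does everything the class does.

```python
# Stateless ReLU as a plain function -- the tinygrad-style approach.
def relu(x):
    return [max(v, 0.0) for v in x]

# PyTorch-style, for contrast: a class wrapping the same pure function.
# There are no parameters, so there is nothing for the object to hold.
class ReLU:
    def __call__(self, x):
        return relu(x)

xs = [-1.0, 0.5, 2.0]
print(relu(xs))     # [0.0, 0.5, 2.0]
print(ReLU()(xs))   # identical output; the instance adds nothing
```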
Oh, no, it doesn't have a cost on performance. But yeah, no, I think that it's... That's what I mean about TinyGrad's front end being cleaner.
I think that there is a spectrum, and on one side you have Mojo and on the other side you have like GGML. GGML is this, like, we're going to run LLaMA fast on Mac. And okay, we're going to expand out a little bit, but we're going to basically go, like, depth first, right? Mojo is like, we're going to go breadth first. We're going to go so wide that we're going to make all of Python fast.
And TinyGrad's in the middle. TinyGrad is, we are going to make neural networks fast.
Yeah, but they have Turing completeness.
My goal is, step one, build an equally performant stack to PyTorch on NVIDIA and AMD, but with way less lines. And then step two is, okay, how do we make an accelerator, right? But you need step one. You have to first build the framework before you can build the accelerator.
So I'm much more of a, like, build it the right way and worry about performance later. There's a bunch of things where I haven't even, like, really dove into performance. The only place where TinyGrad is competitive performance-wise right now is on Qualcomm GPUs. So TinyGrad's actually used in openpilot to run the model. So the driving model is TinyGrad. When did that happen, that transition?
About eight months ago now. And it's 2x faster than Qualcomm's library.
It's a Snapdragon 845. Okay. So this is using the GPU. So the GPU is an Adreno GPU. There's like different things. There's a really good Microsoft paper that talks about like mobile GPUs and why they're different from desktop GPUs. One of the big things is in a desktop GPU, you can use buffers. On a mobile GPU, image textures are a lot faster.
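The buffer-versus-texture point can be made concrete. Here is a minimal Python sketch (not TinyGrad's or Mace's actual code; the function name and layout are illustrative) of the standard trick: pack a flat weight buffer into 4-wide RGBA texels, since a single texel fetch on a mobile GPU returns four values through the texture cache.

```python
# Conceptual sketch: one reason image textures win on mobile GPUs is that a
# single texel fetch returns 4 channels (RGBA) via the texture cache.
# Packing a flat weight buffer into 4-wide texels is the usual layout trick.

def pack_to_texels(weights, width):
    """Pack a flat list of floats into a 2D image of RGBA texels (4 floats each)."""
    texel_count = (len(weights) + 3) // 4
    padded = weights + [0.0] * (texel_count * 4 - len(weights))  # pad to texel boundary
    texels = [tuple(padded[i:i + 4]) for i in range(0, len(padded), 4)]
    # Lay the texels out as rows of the requested image width
    rows = [texels[r:r + width] for r in range(0, len(texels), width)]
    return rows

# 10 floats -> 3 texels (last one zero-padded) -> 2 rows at width 2
image = pack_to_texels([float(i) for i in range(10)], width=2)
```

A real implementation would also pick the image width to respect the GPU's maximum texture dimensions; that detail is omitted here.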
I want to be able to leverage it in a way that it's completely generic, right? So there's a lot of this. Xiaomi has a pretty good open source library for mobile GPUs called Mace, where they can generate, where they have these kernels, but they're all hand-coded, right? So that's great if you're doing 3x3 convs. That's great if you're doing dense matmuls.
But the minute you go off the beaten path a tiny bit, well, your performance is nothing.
You know, almost no one talks about FSD anymore, and even fewer people talk about openpilot. We've solved the problem. Like, we solved it years ago.
Solving means how do you build a model that outputs a human policy for driving? How do you build a model that, given a reasonable set of sensors, outputs a human policy for driving? So you have companies like Waymo and Cruise, which are hand-coding these things that are like quasi-human policies.
Then you have Tesla, and maybe even to more of an extent, Comma, asking, okay, how do we just learn the human policy from data? The big thing that we're doing now, and we just put it out on Twitter, at the beginning of Comma, we published a paper called Learning a Driving Simulator. And the way this thing worked was it was an autoencoder and then an RNN in the middle. Right.
You take an autoencoder, you compress the picture, you use an RNN to predict the next state. And these things were, you know, it was a laughably bad simulator, right? This is 2015-era machine learning technology. Today we have VQ-VAE and transformers. We're building drive-GPT, basically.
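The pipeline described, encode frames to discrete tokens, then predict the next token autoregressively, can be sketched with toy stand-ins. This is not Comma's model: the "encoder" below is a brightness quantizer and the "transformer" is a bigram count model; every name is illustrative of the data flow only.

```python
from collections import Counter, defaultdict

def encode_frame(frame):
    """Toy stand-in for a VQ-VAE encoder: quantize mean brightness to 4 tokens."""
    return min(3, int(sum(frame) / len(frame) // 64))

class NextTokenModel:
    """Toy stand-in for the autoregressive model: bigram counts over tokens."""
    def __init__(self):
        self.counts = defaultdict(Counter)

    def train(self, tokens):
        for prev, nxt in zip(tokens, tokens[1:]):
            self.counts[prev][nxt] += 1

    def predict(self, token):
        # Most frequently observed successor of the current token
        return self.counts[token].most_common(1)[0][0]

# "Driving video": frames alternating dark -> bright -> dark -> ...
frames = [[10] * 8, [200] * 8] * 50
tokens = [encode_frame(f) for f in frames]
model = NextTokenModel()
model.train(tokens)
```

The real system replaces both stand-ins with learned components, but the loop is the same: compress, tokenize, predict the next state.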
It's trained on all the driving data to predict the next frame.
Well, actually our simulator is conditioned on the pose. So it's actually a simulator. You can put in like a state action pair and get out the next state. Okay. And then once you have a simulator, you can do RL in the simulator and RL will get us that human policy.
Yeah. RL with a reward function, not asking is this close to the human policy, but asking would a human disengage if you did this behavior?
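A toy sketch of that reward shaping, under the assumption of a stub one-dimensional "simulator" where drifting out of lane triggers a disengagement; as described, the only reward signal is a penalty per disengagement. Everything here (the drift dynamics, the policies) is made up for illustration.

```python
import random

def simulator_step(lane_pos, steer):
    """Stub 'learned simulator': state is lane offset, action is steering."""
    new_pos = lane_pos + steer + random.uniform(-0.05, 0.05)
    disengaged = abs(new_pos) > 1.0  # a human would grab the wheel here
    return new_pos, disengaged

def run_episode(policy, steps=100):
    """Roll a policy through the simulator; reward is -1 per disengagement."""
    pos, total_reward = 0.0, 0.0
    for _ in range(steps):
        pos, disengaged = simulator_step(pos, policy(pos))
        if disengaged:
            total_reward -= 1.0
            break
    return total_reward

center_policy = lambda pos: -0.5 * pos   # steer back toward lane center
drift_policy = lambda pos: 0.3           # constant drift off the road

random.seed(0)
good = run_episode(center_policy)   # never disengages
bad = run_episode(drift_policy)     # drifts out within a few steps
```

An actual RL setup would optimize the policy against this reward; the point of the sketch is only the reward definition, which is "would a human disengage?" rather than "is this close to the human policy?".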
It's a nice... It's asking exactly the right question. What will make our customers happy?
A system that you never want to disengage.
Usually. There's some that are just, I felt like driving. And those are always fine too. But they're just going to look like noise in the data.
Maybe, yeah.
It's hard to say. We haven't completely closed the loop yet. So we don't have anything built that truly looks like that architecture yet. Mm-hmm. We have prototypes and there's bugs. So we are a couple bug fixes away. Might take a year, might take 10.
They're just like stupid bugs. And also we might just need more scale. We just massively expanded our compute cluster at Comma. We now have about two people worth of compute, 40 petaflops.
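The "two people worth" line is arithmetic on Hotz's own rough working estimate of about 20 petaflops per human brain (his number, not a measured figure):

```python
# Hotz's back-of-the-envelope: 40 PFLOPS cluster / ~20 PFLOPS per brain.
CLUSTER_PFLOPS = 40
BRAIN_PFLOPS_ESTIMATE = 20  # his rough estimate, not a measured value

people_worth = CLUSTER_PFLOPS / BRAIN_PFLOPS_ESTIMATE
```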
Diversity is very important in data. Yeah, I mean, we have, so we have about, I think we have like 5,000 daily actives.
Tesla is always one to two years ahead of us. They've always been one to two years ahead of us. And they probably always will be because they're not doing anything wrong.
I mean, I know they're moving toward more of an end-to-end approach.
They also have a very fancy simulator. They're probably saying all the same things we are. They're probably saying we just need to optimize, you know, what is the reward? We get negative reward for disengagement, right? Like, everyone kind of knows this. It's just a question of who can actually build and deploy the system.
Yeah, and the hardware to run it.
I have a compute cluster in my office. 800 amps.
It's 40 kilowatts at idle, our data center. Drives me crazy. 40 kilowatts just burning just when the computers are idle. Sorry, sorry, compute cluster. Compute cluster, I got it. It's not a data center.
No, data centers are clouds. We don't have clouds. Data centers have air conditioners. We have fans. That makes it a compute cluster.
We have a compute cluster.
Yeah, I don't think that there's, I think that they can reason better than a lot of people.
I mean, I think that calculators can add better than a lot of people.
making brilliancies in chess, which feels a lot like thought. Whatever new thing that AI can do, everybody thinks is brilliant. And then like 20 years go by and they're like, well, yeah, but chess, that's like mechanical. Like adding, that's like mechanical.
You know, I sell phone calls to Comma for $1,000. And some guy called me and like, you know, it's $1,000, you can talk to me for half an hour. And he's like, yeah, okay. So like time doesn't exist. And I really wanted to share this with you. I'm like, oh, what do you mean time doesn't exist, right? I think time is a useful model, whether it exists or not, right? Does quantum physics exist?
The problem is if you go back to 1960 and you tell them that you have a machine that can play amazing chess, of course someone in 1960 will tell you that machine is intelligent. Someone in 2010 won't. What's changed, right? Today, we think that these machines that have language are intelligent, but I think in 20 years we're going to be like, yeah, but can it reproduce?
Humans are always going to define a niche for themselves. Like, well, you know, we're better than the machines because we can, you know, and like they tried creative for a bit, but no one believes that one anymore.
Yeah, and I think maybe we're gonna go through that same thing with language and that same thing with creativity.
The niche is getting smaller.
Oh boy. But no, no, no, you don't understand. Humans are created by God and machines are created by humans. Therefore, right?
Like that'll be the last niche we have.
I'd like to go back to when calculators first came out and, or computers. And like, I wasn't around, look, I'm 33 years old. And to like, see how that affected me.
But the poor milkman, the day he learned about refrigerators, he's like, I'm done.
You're telling me you can just keep the milk in your house? You don't even need to deliver it every day? I'm done.
I do think it's different this time, though. Yeah, it just feels like... The niche is getting smaller.
I think we dramatize everything.
I think that you asked the milkman when he saw refrigerators, and they're going to have one of these in every home?
I disagree, actually. I disagree. I think things like MuZero and AlphaGo are so much more impressive because these things are playing beyond the highest human level.
Well, it doesn't matter. It's about whether it's a useful model to describe reality. Is time maybe compressive?
The language models are writing middle school level essays and people are like, wow, it's a great essay.
It's a great five paragraph essay about the causes of the Civil War.
That's the scariest kind of code. I spend 5% of time typing and 95% of time debugging. The last thing I want is close to correct code.
I want a machine that can help me with the debugging, not with the typing.
I actually don't think it's like level two driving. I think driving is not tool complete and programming is. Meaning you don't use, like, the best possible tools to drive, right? Like, cars have basically had the same interface for the last 50 years.
Computers have a radically different interface.
So think about the difference between a car from 1980 and a car from today.
No difference really. It's got a bunch of pedals. It's got a steering wheel. Maybe now it has a few ADAS features, but it's pretty much the same car. You have no problem getting into a 1980 car and driving it. You take a programmer today who spent their whole life doing JavaScript, and you put him in an Apple IIe prompt, and you tell him about the line numbers in BASIC.
But how do I insert something between line 17 and 18?
Oh, well.
Yes, it's IDEs, the languages, the runtimes. It's everything. And programming is tool complete. So like almost if Codex or Copilot are helping you, that actually probably means that your framework or library is bad and there's too much boilerplate in it.
TinyGrad is now 2,700 lines, and it can run LLaMA and Stable Diffusion, and all of this stuff is in 2,700 lines. Boilerplate and abstraction indirections and all these things are just bad code.
I don't know.
Yeah, I guess if I was really writing, like, maybe today, if I wrote, like, a lot of, like, data parsing stuff.
Yeah.
I mean, I don't play CTFs anymore, but if I still played CTFs, a lot of it's just, like, you have to write, like, a parser for this data format. Or, like, Advent of Code. I wonder when the models are going to start to help with that kind of code. And they may. They may. And the models also may help you with speed. Yeah. And the models are very fast. Yeah.
But here's where the models won't help: my programming speed is not at all limited by my typing speed. And in very few cases it is, yes. If I'm writing some script to just, like, parse some weird data format, sure, my programming speed is limited by my typing speed.
I don't think it matters.
You know... When I was at Twitter, I tried to use ChatGPT to ask some questions, like, what's the API for this? And it would just hallucinate. It would just give me completely made-up API functions that sounded real.
Yes.
If you are writing an absolute basic React app with a button, it's not going to hallucinate, sure. No, there's kind of ways to fix the hallucination problem. I think Facebook has an interesting paper. It's called Atlas. And it's actually weird the way that we do language models right now where all of the information is in the weights. And the human brain is not really like this.
It's like a hippocampus and a memory system. So why don't LLMs have a memory system? And there's people working on them. I think future LLMs are going to be like smaller, but are going to run looping on themselves and are going to have retrieval systems. And the thing about using a retrieval system is you can cite sources explicitly.
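A toy sketch of the retrieval idea (this is not the Atlas architecture; the corpus, overlap scoring, and names are all made up for illustration): instead of keeping every fact in the weights, look the passage up in an external store and return it together with its source, so the answer can cite.

```python
# Toy retrieval step: score passages by word overlap with the query and
# return the best one *with its source id*, so the answer can cite it.

CORPUS = {
    "doc1": "the hippocampus is involved in memory consolidation",
    "doc2": "transformers use attention over token sequences",
    "doc3": "retrieval systems let language models cite sources",
}

def retrieve(query, corpus):
    """Return the (source_id, passage) pair with the largest word overlap."""
    q = set(query.lower().split())
    return max(corpus.items(), key=lambda kv: len(q & set(kv[1].split())))

def answer_with_citation(query, corpus):
    source, passage = retrieve(query, corpus)
    return f"{passage} [source: {source}]"

cited = answer_with_citation("how can language models cite sources", CORPUS)
```

A production retriever would use dense embeddings rather than word overlap, but the property that matters here survives either way: the source travels with the passage, so the model can point at where an answer came from.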
Sure.
That's going to kill Google.
When someone makes an LLM that's capable of citing its sources, it will kill Google.
That's what people want in a search engine.
Maybe.
I'd count them out.
I'm not trying to compete on that.
Maybe.
When I started Comma, I said over and over again, I'm going to win self-driving cars. I still believe that. I have never said I'm going to win search with the tiny corp, and I'm never going to say that because I won't.
So there are things that are real. Kolmogorov complexity is real.
Some startup's going to figure it out. I think if you ask me, like Google's still the number one webpage, I think by the end of the decade, Google won't be the number one webpage anymore.
Look, I would put a lot more money on Mark Zuckerberg.
Because Mark Zuckerberg's alive. Like, this is an old Paul Graham essay. Startups are either alive or dead. Google's dead.
Meta.
You see what I mean? Like, that's just, like, Mark Zuckerberg, this is Mark Zuckerberg reading that Paul Graham essay and being like, I'm going to show everyone how alive we are. I'm going to change the name.
Yeah. The compressive thing. Math is real.
When I listened to your Sam Altman podcast, he talked about the button. Everyone who talks about AI talks about the button, the button to turn it off, right? Do we have a button to turn off Google? Is anybody in the world capable of shutting Google down?
Can we shut the search engine down?
Either.
Does Sundar Pichai have the authority to turn off google.com tomorrow?
Are you sure? No, they have the technical power, but do they have the authority? Let's say Sundar Pichai made this his sole mission, came into Google tomorrow and said, I'm going to shut google.com down.
And I think hard things are actually hard. I don't think P equals NP.
I don't think he'd keep his position too long.
Well, boards and shares and corporate undermining and, oh my God, our revenue is zero now.
Yeah. And it will have a, I mean, this is true for the AIs too, right? There's no turning the AIs off. There's no button. You can't press it. Now, does Mark Zuckerberg have that button for facebook.com?
I think he does. I think he does. And this is exactly what I mean and why I bet on him so much more than I bet on Google.
Oh, Elon has the button. Yeah.
Well, I think that's the majority.
Does Elon, can Elon fire the missiles? Can he fire the missiles?
I mean, you know, what's the difference between a rocket and an ICBM? A rocket that can land anywhere, is that an ICBM? Well, you know, don't ask too many questions.
I would bet on a startup.
I bet on something that looks like Midjourney, but for search.
The other thing that's gonna be cool is there is some aspect of a winner take all effect, right? Like once someone starts deploying a product that gets a lot of usage, and you see this with OpenAI, they are going to get the dataset to train future versions of the model.
They are going to be able to, you know, I was asked at Google Image Search when I worked there like almost 15 years ago now, how does Google know which image is an apple? And I said, the metadata. And they're like, yeah, that works about half the time. How does Google know? You'll see they're all apples on the front page when you search apple. And I don't know, I didn't come up with the answer.
For that one, I do.
The guy's like, well, it's what people click on when they search Apple. I'm like, oh, yeah.
Who would have thought that Mark Zuckerberg would be the good guy? I mean it.
Undoubtedly. You know, what's ironic about all these AI safety people is they are going to build the exact thing they fear. This "we need to have one model that we control and align" stuff, this is the only way you end up paperclipped. There's no way you end up paperclipped if everybody has an AI.
Absolutely. It's the only way. You think you're going to control it? You're not going to control it.
Sam Altman won't tell you that GPT-4 has 220 billion parameters and is a 16-way mixture model with eight sets of weights?
I mean, look, everyone at OpenAI knows what I just said was true, right? Now, ask the question, really. You know, it upsets me when I, like GPT-2, when OpenAI came out with GPT-2 and raised a whole fake AI safety thing about that, I mean, now the model is laughable. Like, they used AI safety to hype up their company, and it's disgusting.
That's the charitable interpretation.
Oh, there's so much hype. At least on Twitter. I don't know. Maybe Twitter's not real life.
I remembered half the things I said on stream.
Have you met humans?
Someday someone's going to make a model of all of that and it's going to come back to haunt me.
Yeah, I know. But half of these AI alignment problems are just human alignment problems. And that's what's also so scary about the language they use. It's like, it's not the machines you want to align. It's me.
I mean, yeah.
Yeah, probably.
No, there's not a lot of friction. That's so easy.
No, there's like lots of stuff.
First off, first off, first off, anyone who's stupid enough to search for how to blow up a building in my neighborhood is not smart enough to build a bomb, right?
Yes.
They're not going to build a bomb, trust me. The people who are incapable of figuring out how to ask that question a bit more academically and get a real answer from it are not capable of procuring the materials, which are somewhat controlled, to build a bomb.
You can hire people, you can find... Or you can hire people to build a... You know what? I was asking this question on my stream. Can Jeff Bezos hire a hitman? Probably not.
Yeah, and you'll still go to jail, right? It's not like the language model is God. The language model... It's like you literally just hired someone on Fiverr.
I mean, the question is when the George Hotz model is better than George Hotz. Like I am declining and the model is growing.
I mean, yeah, and I think that if someone is actually serious enough to hire a hitman or build a bomb, they'd also be serious enough to find the information.
What you're basically saying is like, okay, what's going to happen is these people who are not intelligent are going to use machines to augment their intelligence. And now intelligent people and machines, intelligence is scary. Intelligent agents are scary. When I'm in the woods, the scariest animal to meet is a human. Look, there's nice California humans.
What you're basically saying is like, okay, what's going to happen is these people who are not intelligent are going to use machines to augment their intelligence. And now intelligent people and machines, intelligence is scary. Intelligent agents are scary. When I'm in the woods, the scariest animal to meet is a human. Look, there's nice California humans.
I see you're wearing street clothes and Nikes. All right, fine. But you look like you've been a human who's been in the woods for a while. I'm more scared of you than a bear.
I see you're wearing street clothes and Nikes. All right, fine. But you look like you've been a human who's been in the woods for a while. I'm more scared of you than a bear.
Oh, yeah. So intelligence is scary. So to ask this question in a generic way, you're like, what if we took everybody who maybe has ill intention but is not so intelligent and gave them intelligence? So we should have intelligence control, of course. We should only give intelligence to good people. And that is the absolutely horrifying idea.
Oh, yeah. So intelligence is scary. So to ask this question in a generic way, you're like, what if we took everybody who maybe has ill intention but is not so intelligent and gave them intelligence? So we should have intelligence control, of course. We should only give intelligence to good people. And that is the absolutely horrifying idea.
Give intelligence to everybody. You know what? And it's not even like guns, right? Like people say this about guns. You know, what's the best defense against a bad guy with a gun, a good guy with a gun? Like I kind of subscribe to that, but I really subscribe to that with intelligence.
Give intelligence to everybody. You know what? And it's not even like guns, right? Like people say this about guns. You know, what's the best defense against a bad guy with a gun, a good guy with a gun? Like I kind of subscribe to that, but I really subscribe to that with intelligence.
Maybe you can just play a game where you have the George Hotz answer and the George Hotz model answer and ask which people prefer.
Yes.
Yeah. I hope they lose control. I want them to lose control more than anything else.
Centralized and held control is tyranny. I don't like anarchy either, but I will always take anarchy over tyranny. Anarchy, you have a chance.
A lot. I lost $80,000 last year investing in Meta. And when they released Llama, I'm like, yeah, whatever, man. That was worth it.
So if I were a researcher, why would you want to work at OpenAI? Like, you know, you're just, you're on the bad team. Like, I mean it. Like, you're on the bad team who can't even say that GPT-4 has 220 billion parameters.
Not only closed source. I'm not saying you need to make your model weights open. I'm not saying that. I totally understand we're keeping our model weights closed because that's our product, right? That's fine. I'm saying like, because of AI safety reasons, we can't tell you the number of billions of parameters in the model. That's just the bad guys.
Either one. It will hurt more when it's people close to me, but both will be overtaken by the George Hotz model.
Intelligence is so dangerous, be it human intelligence or machine intelligence. Intelligence is dangerous.
But you mean like the intelligence agencies in America are doing right now?
They're doing it pretty well.
Well, I mean, of course, they're looking into the latest technologies for control of people, of course.
No, and I'll tell you why the George Hotz character can't. And I thought about this a lot with hacking. Like, I can find exploits in web browsers. I probably still can. I mean, I was better at it when I was 24, but... The thing that I lack is the ability to slowly and steadily deploy them over five years. And this is what intelligence agencies are very good at, right?
Intelligence agencies don't have the most sophisticated technology. They just have- Endurance?
So the more we can decentralize power, like you could make an argument, by the way, that nobody should have these things. And I would defend that argument. I would, like you're saying that, look, LLMs and AI and machine intelligence can cause a lot of harm, so nobody should have it.
And I will respect someone philosophically with that position, just like I will respect someone philosophically with the position that nobody should have guns. But I will not philosophically respect the position that only the trusted authorities should have access to this. Who are the trusted authorities? You know what? I'm not worried about alignment between the AI company and their machines.
I'm worried about alignment between me and the AI company.
I know. And... I thought about this. I thought about this. And I think this comes down to a repeated misunderstanding of political power by the rationalists. Interesting. I think that Eliezer Yudkowsky is scared of these things. And I am scared of these things too. Everyone should be scared of these things. These things are scary. But now you ask about the two possible futures.
Yeah.
One where a small, trusted, centralized group of people has them, and the other where everyone has them. And I am much less scared of the second future than the first.
There's a difference. Again, a nuclear weapon cannot be deployed tactically, and a nuclear weapon is not a defense against a nuclear weapon. Except maybe in some philosophical mind game kind of way.
Okay. Let's say the intelligence agency deploys a million bots on Twitter or a thousand bots on Twitter to try to convince me of a point. Imagine I had a powerful AI running on my computer saying, okay, nice PSYOP, nice PSYOP, nice PSYOP. Okay. Here's a PSYOP. I filtered it out for you.
I'm not even like, I don't even mean these things in like truly horrible ways. I mean these things in straight up like ad blocker, right? Yeah. Straight up ad blocker, right? I don't want ads. Yeah. But they are always finding, you know, imagine I had an AI that could just block all the ads for me.
Especially when it's fine-tuned to their preferences.
Yeah, I'm not even going to say there's a lot of good guys. I'm saying that good outnumbers bad, right? Good outnumbers bad.
Yeah, definitely in skill and performance, probably just in number too, probably just in general. I mean, you know, if you believe philosophically in democracy, you obviously believe that good outnumbers bad. And like the only, if you give it to a small number of people, there's a chance you gave it to good people, but there's also a chance you gave it to bad people.
If you give it to everybody, well, if good outnumbers bad, then you definitely gave it to more good people than bad.
Well, that's, I mean, look, I respect capitalism. I don't think that, I think that it would be polite for you to make model architectures open source and fundamental breakthroughs open source. I don't think you have to make weights open source.
I sure hope so. I hope to see another era. You know, the kids today don't know how good the internet used to be. And I don't think this is just, come on, like everyone's nostalgic for their past. But I actually think the internet, before small groups of weaponized corporate and government interests took it over, was a beautiful place.
Here's a question to ask about those beautiful, sexy products. Imagine 2000 Google to 2010 Google, right? A lot changed. We got Maps. We got Gmail.
Yeah, I mean, somewhere probably. We've got Chrome, right? And now let's go to 2010. We've got Android. Now let's go from 2010 to 2020. What does Google have? Well, search engine, maps, mail, Android, and Chrome. Oh, I see. The internet was this... You know, I was Time's Person of the Year in 2006. Yeah.
There's a Star Trek Voyager episode where, you know, Kathryn Janeway, lost in the Delta Quadrant, makes herself a lover on the holodeck. And, um... The lover falls asleep on her arm, and he snores a little bit, and Janeway edits the program to remove that. And then, of course, the realization is, wait, this person's terrible.
I love this. It's... "You" was Time's Person of the Year in 2006, right? Like, that's, you know... So quickly did people forget. And I think some of it's social media. I think some of it... I hope... Look, I hope that... I don't... It's possible that some very sinister things happened. I don't know. I think it might just be like the effects of social media. But something happened in the last 20 years.
Yeah.
It's just such a shame that they all got rich. You know?
If you took all the money out of crypto, it would have been a beautiful place. Yeah. No, I mean, these people, you know, they sucked all the value out of it and took it.
You corrupted all of crypto. You had coins worth billions of dollars that had zero use.
Sure. I have hope for the ideas. I really do. Yeah, I mean, you know, I want the US dollar to collapse. I do.
I am so much not worried about the machine independently doing harm. That's what some of these AI safety people seem to think. They somehow seem to think that the machine independently is going to rebel against its creator.
No, this is sci-fi B movie garbage.
If the thing writes viruses, it's because the human
B, B, B, B plot sci-fi. Not real.
The thing that worries me, I mean, we have a real danger to discuss and that is bad humans using the thing to do whatever bad unaligned AI thing you want.
Nobody does. We give it to everybody. And if you do anything besides give it to everybody, trust me, the bad humans will get it. Because that's who gets power. It's always the bad humans who get power. Okay.
It is actually all their nuances and quirks and slight annoyances that make this relationship worthwhile. But I don't think we're going to realize that until it's too late.
I don't think everyone. I don't think everyone. I just think that like, here's the saying that I put in one of my blog posts. It's, when I was in the hacking world, I found 95% of people to be good and 5% of people to be bad. Like just who I personally judged as good people and bad people. Like they believed about like, you know, good things for the world.
They wanted like flourishing and they wanted, you know, growth and they wanted things I consider good, right? Mm-hmm. I came into the business world with comma and I found the exact opposite. I found 5% of people good and 95% of people bad. I found a world that promotes psychopathy.
That saying may, of course, be my own biases, right? That may be my own biases that these people are a lot more aligned with me than these other people, right?
So, you know, I can certainly recognize that. But, you know, in general, I mean, this is like the common sense maxim, which is the people who end up getting power are never the ones you want with it.
That's not up to me. I mean, you know, like I'm not a central planner.
I have my ideas of what to do with it and everyone else has their ideas of what to do with it. May the best ideas win.
You're saying that you should build AI firewalls? That sounds good. You should definitely be running an AI firewall.
You should be running an AI firewall to your mind. You're constantly under... That's such an interesting idea. Infowars, man.
I would pay so much money for that product. I would pay so much money for that product. You know how much money I'd pay just for a spam filter that works?
Just the perfect amount of quirks and flaws to make you charming without crossing the line.
And it's like... Whenever someone's telling me some story from the news, I'm always like, I don't want to hear it. CIA op, bro. It's a CIA op, bro. Like, it doesn't matter if that's true or not. It's just trying to influence your mind. You're repeating an ad to me. The viral mobs, yeah.
This is why I delete my tweets.
You know what it is? The algorithm promotes toxicity.
And like, you know, I think Elon has a much better chance of fixing it than the previous regime.
But to solve this problem, to solve, like to build a social network that is actually not toxic without moderation.
Yeah.
Without ever censoring. And like Scott Alexander has a blog post I like where he talks about like moderation is not censorship, right? Like all moderation you want to put on Twitter, right? Like you could totally make this moderation like just a, you don't have to block it for everybody. You can just have like a filter button, right?
That people can turn off, like a safe search for Twitter, right? Like someone could just turn that off, right? So like, but then you'd like take this idea to an extreme, right? Well, the network should just show you... This is a couch surfing CEO thing, right? If it shows you... Right now, these algorithms are designed to maximize engagement. Well, it turns out outrage maximizes engagement.
Quirk of human, quirk of the human mind, right? Just as I fall for it, everyone falls for it. So yeah, you got to figure out how to maximize for something other than engagement.
I actually think it's incredible that we're starting to see, I think, again, Elon's doing so much stuff right with Twitter, like charging people money. As soon as you charge people money, they're no longer the product. They're the customer. And then they can start building something that's good for the customer and not good for the other customer, which is the ad agencies.
I pay for Twitter. It doesn't even get me anything. It's my donation to this new business model, hopefully working out.
I don't think you need most people at all. I think that I, why do I need most people? Right. Don't make an 8,000 person company, make a 50 person company.
I did.
Mm-hmm.
Eh.
So I deleted my first Twitter in 2010. I had over 100,000 followers back when that actually meant something. And I just saw, you know, my coworker summarized it well. He's like, whenever I see someone's Twitter page, I either think the same of them or less of them. I never think more of them.
And of course it can and it will, but all that difficulty at that point is artificial. There's no more real difficulty.
Right. Like, like, you know, I don't want to mention any names, but like some people who like, you know, maybe you would like read their books and you would respect them. You see them on Twitter and you're like, okay, dude.
Yeah.
Okay.
There's probably a few of those people. And the problem is inherently what the algorithm rewards, right? And people think about these algorithms. People think that they are terrible, awful things. And, you know, I love that Elon open sourced it. Because, I mean, what it does is actually pretty obvious. It just predicts what you are likely to retweet and like and linger on.
That's what all these algorithms do. That's what TikTok does. That's what all these recommendation engines do. And it turns out that the thing that you are most likely to interact with is outrage. And that's a quirk of the human condition.
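The recommendation loop described here can be sketched in a few lines: score each candidate post by its predicted interaction probabilities, then sort. The field names and probabilities below are made-up stand-ins, not any real platform's API; in a real system the numbers come from learned models over interaction logs.

```python
# Toy sketch of engagement ranking: predict interactions, sort by the sum.
# The probabilities are hard-coded placeholders for model outputs.

def engagement_score(post: dict) -> float:
    # Outrage-bait tends to score high on all three signals at once,
    # which is the "quirk of the human condition" being described.
    return post["p_retweet"] + post["p_like"] + post["p_linger"]

def rank_feed(posts: list[dict]) -> list[dict]:
    # Serve the feed in descending order of predicted engagement.
    return sorted(posts, key=engagement_score, reverse=True)

feed = [
    {"id": "calm-explainer", "p_retweet": 0.02, "p_like": 0.10, "p_linger": 0.30},
    {"id": "outrage-bait",   "p_retweet": 0.15, "p_like": 0.20, "p_linger": 0.60},
]
print([p["id"] for p in rank_feed(feed)])  # → ['outrage-bait', 'calm-explainer']
```

Maximizing "something other than engagement" amounts to swapping in a different scoring function here; everything downstream stays the same.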
Artificial difficulty is difficulty that's constructed or could be turned off with a knob. Real difficulty is like you're in the woods and you've got to survive.
Yeah.
Yeah, so my time there, I absolutely couldn't believe, you know, I got a crazy amount of hate, you know, just on Twitter, for working at Twitter. It seemed like people associated with this... I think maybe you were exposed to some of this. So, connection to Elon, or is it working at Twitter? Twitter and Elon, like the whole... Elon's gotten a bit spicy during that time. A bit political. A bit, yeah.
Yeah, you know, I remember one of my tweets, it was never go full Republican, and Elon liked it. You know, I think, you know.
Boy. Yeah.
Sure, absolutely.
I was hoping, and I remember when Elon talked about buying Twitter six months earlier, he was talking about a principled commitment to free speech. And I'm a big believer and fan of that. I would love to see an actual principled commitment to free speech. Of course, this isn't quite what happened. Instead of the oligarchy deciding what to ban, you had a monarchy deciding what to ban. Right?
Instead of, you know, all the Twitter files, shadow banning. And really, the oligarchy just decides what? Cloth masks are ineffective against COVID. That's a true statement. Every doctor in 2019 knew it. And now I'm banned on Twitter for saying it? Interesting. Oligarchy. So now you have a monarchy. And, you know, he bans things he doesn't like. So, you know, it's just different. It's different power.
And, like, you know, maybe I align more with him than with the oligarchy.
Yeah, I think so. Or, I mean, you can't get out of this by smashing the knob with a hammer. I mean, maybe you kind of can. You know, in Into the Wild, Alexander Supertramp wants to explore something that's never been explored before, but it's the 90s, everything's been explored. So he's like, well, I'm just not going to bring a map.
And this isn't even remotely controversial. This is just saying you want to give paying customers for a product what they want.
It's individualized, transparent censorship, which is honestly what I want. What is an ad blocker? It's individualized, transparent censorship, right?
I know, but I just use words to describe what they functionally are and what is an ad blocker. It's just censorship.
Maslow's hierarchy of argument. I think that's a real word for it.
You have like ad hominem refuting the central point. I like seeing this as an actual pyramid.
I mean, we can just train a classifier to absolutely say what level of Maslow's hierarchy of argument are you at? And if it's ad hominem, like, okay, cool. I turned on the no ad hominem filter.
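The classifier-plus-filter idea described above can be sketched in a few lines of Python. This is only a toy: the trained classifier is hypothetical, so a keyword heuristic stands in for it here, and the level names follow the hierarchy-of-disagreement pyramid. The point is the filtering mechanics, where each user picks their own floor.

```python
# Toy sketch of the "no ad hominem filter": classify each reply's level on
# the hierarchy of argument, then let every user choose their own floor.
# A real deployment would use a trained text classifier; the keyword
# heuristic below is a stand-in purely to show the filtering mechanics.

LEVELS = [                       # ordered lowest to highest
    "name-calling",
    "ad-hominem",
    "responding-to-tone",
    "contradiction",
    "counterargument",
    "refutation",
    "refuting-the-central-point",
]

def classify(text: str) -> str:
    """Hypothetical classifier: maps a reply to a hierarchy level."""
    t = text.lower()
    if "idiot" in t or "moron" in t:
        return "name-calling"
    if "you would say that" in t or "of course you think" in t:
        return "ad-hominem"
    if "actually, the data" in t or "the central claim" in t:
        return "refuting-the-central-point"
    return "counterargument"

def filter_replies(replies, minimum="contradiction"):
    """Individualized, transparent censorship: drop replies below the
    level this particular user opted into."""
    floor = LEVELS.index(minimum)
    return [r for r in replies if LEVELS.index(classify(r)) >= floor]
```

Because the filter is per-user and the level assignment is visible, it stays transparent in exactly the way an ad blocker is: you chose the rule, and you can see what it removed.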
Yeah, so here's the problem with that: it's not going to win in a free market. What wins in a free market is engagement. All television today is reality television because it's engaging, right? So it becomes hard to keep these other, more nuanced values.
So my technical recommendation to Elon, and I said this on the Twitter spaces afterward, I said this many times during my brief internship, was that you need refactors before features. This code base was, and look, I've worked at Google, I've worked at Facebook. Facebook has the best code, then Google, then Twitter. And you know what?
Yeah.
You can know this because look at the machine learning frameworks, right? Facebook released PyTorch, Google released TensorFlow, and Twitter released...
I mean, no, you're not exploring. You should have brought a map, dude. You died. There was a bridge a mile from where you were camping.
Okay.
I still believe, despite the amount of hate I got for saying this, that 50 people could build and maintain Twitter.
You know what it is? And it's the same. This is my summary of the hate I get on Hacker News. It's like... When I say I'm going to do something, they have to believe that it's impossible. Because if doing things was possible, they'd have to do some soul searching and ask the question, why didn't they do anything?
No, but the mockers aren't experts. The people who are mocking are not experts with carefully reasoned arguments about why you need 8,000 people to run a bird app.
By not bringing the map, you didn't become an explorer. You just smashed the thing.
You know, some people in the world like to create complexity. Some people in the world thrive under complexity, like lawyers, right? Lawyers want the world to be more complex because you need more lawyers, you need more legal hours, right? I think that's another. If there's two great evils in the world, it's centralization and complexity.
Yeah. The difficulty is still artificial.
What if we just don't have access to the knob? Well, that maybe is even scarier, right? Like, we already exist in a world of nature, and nature has been fine-tuned over billions of years. Humans building something and then throwing the knob away in some grand romantic gesture is horrifying.
One of my favorite things to look at today is how much do you trust your tests, right? We've put a ton of effort in Comma and I've put a ton of effort in TinyGrad into making sure if you change the code and the tests pass, that you didn't break the code. Now, this obviously is not always true,
But the closer that is to true, the more you trust your tests, the more you're like, oh, I got a pull request and the tests pass. I feel okay to merge that. The faster you can make progress.
And Twitter had a... Not that. So... It was impossible to make progress in the code base.
The real thing that I spoke to a bunch of, you know, like individual contributors at Twitter. And I just asked, I'm like, okay, so like, what's wrong with this place? Why does this code look like this? And they explained to me what Twitter's promotion system was. The way that you got promoted at Twitter was you wrote a library that a lot of people used. Right?
So some guy wrote an NGINX replacement for Twitter. Why does Twitter need an NGINX replacement? What was wrong with NGINX?
Well, you see, you're not going to get promoted if you use NGINX.
But if you write a replacement and lots of people start using it as the Twitter front end for their product, then you're going to get promoted, right?
So what I do at Comma and at TinyCorp is you have to explain it to me. You have to explain to me what this code does. And if I can sit there and come up with a simpler way to do it, you have to rewrite it. You have to agree with me about the simpler way. I'm, you know, obviously we can have a conversation about this.
It's not a, it's not dictatorial, but if you're like, wow, wait, that actually is way simpler. Like, like the simplicity is important.
It requires technical leadership. You trust.
Managers should be better programmers than the people who they manage.
And like, you know, and this is just, I've instilled this culture at Comma, and Comma has better programmers than me who work there. But, you know, again, I'm like the, you know, the old guy from Good Will Hunting. It's like, look, man, you know, I might not be as good as you, but I can see the difference between me and you, right? And like, this is what you need. This is what you need at the top.
Or you don't necessarily need the manager to be the absolute best. I shouldn't say that, but like they need to be able to recognize skill.
You know, I took a political approach at Comma, too, that I think is pretty interesting. I think Elon takes the same political approach. You know, Google had no politics, and what ended up happening is the absolute worst kind of politics took over. Comma has an extreme amount of politics, and they're all mine, and no dissent is tolerated.
Yep. It's an absolute dictatorship, right? Elon does the same thing. Now, the thing about my dictatorship is here are my values.
It's transparent. It's a transparent dictatorship, right? And you can choose to opt in or, you know, you get free exit, right? That's the beauty of companies. If you don't like the dictatorship, you quit.
The main thing I would do is first of all, identify the pieces and then put tests in between the pieces, right? So there's all these different, Twitter has a microservice architecture, there's all these different microservices. And the thing that I was working on there, look, like, you know, George didn't know any JavaScript. He asked how to fix search, blah, blah, blah, blah, blah.
Look, man, the thing is, like, you know, I'm upset about the way this whole thing was portrayed, because it wasn't taken by people honestly. It was taken by people who started out with a bad-faith assumption. Yeah. And I mean, look, I can't like.
Yeah. Like really, it does. And like, you know, he came on my, the day I quit, he came on my Twitter spaces afterward and we had a conversation. Like, I just, I respect that so much.
It was fun. It was stressful. But I felt like, you know, it was at, like, a cool, like, point in history. And, like, I hope I was useful. I probably kind of wasn't. But, like, maybe I was.
Yeah.
It's refactoring all the way down.
I don't think there's a clear line there. I think it's all kind of just fuzzy. I don't know. I mean, I don't think I'm conscious. I don't think I'm anything. I think I'm just a computer program.
This is the main philosophy of TinyGrad. You have never refactored enough. Your code can get smaller. Your code can get simpler. Your ideas can be more elegant.
I mean, the first thing that I would do is build tests. The first thing I would do is get a CI to where people can trust to make changes. Before I touched any code, I would actually say, no one touches any code. The first thing we do is we test this code base. I mean, this is classic. This is how you approach a legacy code base.
This is what any book on how to approach a legacy code base will tell you.
We look at this thing that's 100,000 lines and we're like, well, okay, maybe this even made sense in 2010, but now we can replace this with an open source thing, right? Yeah. And we look at this here, here's another 50,000 lines. Well, actually, we can replace this with 300 lines of Go. And you know what? I trust that the Go actually replaces this thing because all the tests still pass.
So step one is testing. And then step two is, like, the programming language is an afterthought, right? You know, let a whole lot of people compete. Be like, okay, who wants to rewrite a module? Whatever language you want to write it in, just the tests have to pass. And if you figure out how to make the tests pass but break the site, then we've got to go back to step one.
Step one is get tests that you trust in order to make changes in the code base.
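What step one looks like in practice is classic characterization ("golden master") testing, sketched minimally below in Python. The legacy_render function and its inputs are hypothetical stand-ins; the pattern is to record the current behavior first, then accept a rewrite only if it reproduces that recording.

```python
# Characterization ("golden master") testing: before touching legacy code,
# record exactly what it does today, then gate every rewrite on matching it.

def legacy_render(handle: str) -> str:
    # Hypothetical stand-in for a legacy module nobody fully understands.
    return "<b>@" + handle.strip().lower() + "</b>"

# Step 1: capture current behavior as golden outputs over representative inputs.
GOLDEN = {h: legacy_render(h) for h in ["geohot", " Lex ", "a" * 15]}

def passes_characterization(new_impl) -> bool:
    """Accept a rewrite (in whatever language, behind whatever wrapper)
    only if it reproduces the recorded behavior on every captured input."""
    return all(new_impl(h) == out for h, out in GOLDEN.items())

# Step 2: propose a simpler replacement and check it against the goldens.
def simpler_render(handle: str) -> str:
    return f"<b>@{handle.strip().lower()}</b>"
```

This is what makes the "whatever language you want" competition safe: the goldens, not code review of 100,000 lines, decide whether the replacement is faithful.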
So I'll tell you what my plan was at Twitter. It's actually similar to something we use at Comma. So at Comma, we have this thing called Process Replay. And we have a bunch of routes that'll be run through. So Comma is a microservice architecture too. We have microservices in the driving. We have one for the cameras, one for the sensor, one for the planner, one for the model.
Everything running in the universe is computation, I think. I believe the extended Church-Turing thesis.
And we have an API, which the microservices talk to each other with. We use this custom thing called Cereal, which uses ZMQ. Twitter uses Thrift. And then it uses this thing called Finagle, which is a Scala RPC backend. But this doesn't even really matter. The Thrift and Finagle layer was a great place, I thought, to write tests. To start building something that looks like process replay.
So Twitter had some stuff that looked kind of like this, but it wasn't offline. It was only online. So you could ship a modified version of it, and then you could redirect some of the traffic to your modified version and diff those two, but it was all online. There was no CI in the traditional sense. I mean, there was some, but it was not full coverage.
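The process replay idea can be sketched in a few lines. The services and requests here are hypothetical stand-ins; the real mechanism records traffic at the RPC boundary (Thrift/Finagle in Twitter's case, Cereal in Comma's) and replays it offline against modified code.

```python
# Sketch of "process replay": capture request/response pairs at a service
# boundary, then replay the recorded requests through modified code offline
# and diff the responses against what production actually returned.

def record(service, requests):
    """Capture a golden trace of current behavior from real traffic."""
    return [(req, service(req)) for req in requests]

def replay_diff(trace, modified_service):
    """Replay recorded requests through new code; return every mismatch."""
    return [
        (req, old, new)
        for req, old in trace
        if (new := modified_service(req)) != old
    ]

def old_service(req):
    return {"user": req, "verified": False}

def refactored_service(req):    # a faithful rewrite: same behavior
    return {"verified": False, "user": req}

def broken_service(req):        # a rewrite that silently changed behavior
    return {"user": req.upper(), "verified": False}

trace = record(old_service, ["alice", "bob"])
```

Unlike the online traffic-splitting approach, a recorded trace like this runs offline in CI, so every pull request can be diffed against production behavior before any live traffic is touched.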
Well, then this was another problem. You can't run all of Twitter, right?
Twitter runs in three data centers, and that's it. Yeah. There's no other place you can run Twitter, which is like, George, you don't understand. This is modern software development. No, this is bullshit. Like, why can't it run on my laptop? Twitter can run it. Yeah, okay.
Well, I'm not saying you're going to download the whole database to your laptop, but I'm saying all the middleware and the front end should run on my laptop, right?
The problem is more like, why did the code base have to grow? What new functionality has been added to compensate for the lines of code that are there?
Well, yeah, but I mean models have consistency too.
And you know what? The incentive for politicians to move up in the political structure is to add laws. Yeah. Same problem.
I mean, you know what? This is something that I do differently from Elon, with Comma, about self-driving cars. You know, I hear the new version is going to come out, and the new version is not going to be better at first, and it's going to require a ton of refactors. I say, okay, take as long as you need. Like, you convinced me this architecture is better. Okay, we have to move to it.
Even if it's not going to make the product better tomorrow, the top priority is getting the architecture right.
Models that have been RLHF'd will continually say, you know, like, well, how do I murder ethnic minorities? Oh, well, I can't let you do that, Dave. There's a consistency to that behavior.
You know, and I'm not the right person to run Twitter.
I'm just not. And that's the problem. Like, I don't really know. I don't really know if that's... You know, a common thing that I thought a lot while I was there was whenever I thought something that was different to what Elon thought, I'd have to run something in the back of my head reminding myself that Elon is the richest man in the world. And in general, his ideas are better than mine.
Now, there's a few things I think I do understand and know more about, but... But, like, in general, I'm not qualified to run Twitter. I was going to say qualified, but, like, I don't think I'd be that good at it. I don't think I'd be good at it. I don't think I'd really be good at running an engineering organization at scale.
I think I could lead a very good refactor of Twitter, and it would take, like, six months to a year, and the result to show at the end of it would be that feature development in general takes 10x less time, 10x fewer man-hours. That's what I think I could actually do. Do I think it's the right decision for the business? That's above my pay grade.
I don't want to be a manager. I don't want to do that. If you really forced me to, yeah, it would make me upset if I had to make those decisions. I don't want to.
George, you're a junior software engineer. Every junior software engineer wants to come in and refactor the whole code.
Okay, that's like your opinion, man.
Like, whether they're right or not, it's definitely not for that reason, right? It's definitely not a question of engineering prowess. It's a question of maybe what the priorities are for the company. And I did get more intelligent feedback from people, I think in good faith, actually from Elon, sort of. People were like, well, you know, a stop-the-world refactor might be great for engineering, but we have a business to run. And hey, above my pay grade.
My respect for him is unchanged. And I did have to think a lot more deeply about some of the decisions he's forced to make.
About like a whole like... like matrix coming at him. I think that's Andrew Tate's word for it. Sorry to borrow it.
Yeah. Like, like the war on the woke. Yeah. Like it just, it just, man. And like, he doesn't have to do this, you know. He doesn't have to. He could go like Parag and go chill at the Four Seasons in Maui, you know. But see, one person I respect and one person I don't.
I wouldn't define the ideal so simply. I think you can define the ideal as no more than just saying: Elon's idea of a good world.
Yeah. I mean, monarchy has problems, right? But I mean, would I trade right now the current oligarchy, which runs America, for the monarchy? Yeah, I would. Sure. For the Elon monarchy? Yeah. You know why? Because power would cost one cent a kilowatt hour.
Right now, I pay about 20 cents a kilowatt hour for electricity in San Diego. That's like the same price you paid in 1980. What the hell?
Maybe we'd have some Hyperloops.
Right. And I'm willing to make that trade off. Right. I'm willing to be. And this is why, you know, people think that like dictators take power through some, like through some untoward mechanism. Sometimes they do, but usually it's because the people want them. And the downsides of a dictatorship, I feel like we've gotten to a point now with the oligarchy where, yeah, I would prefer the dictator.
I liked it more than I thought. I did the tutorials. I was very new to it. It would take me six months to be able to write good Scala.
I love doing new programming tutorials and doing them. I did all this for Rust.
It keeps some of its upsetting JVM roots, but it is a much nicer language. In fact, I almost don't know why Kotlin took off and not Scala. I think Scala has some beauty that Kotlin lacked. Whereas Kotlin felt a lot more, I mean, it was almost like, I don't know if it actually was a response to Swift, but that's kind of what it felt like.
Like Kotlin looks more like Swift and Scala looks more like, well, like a functional programming language, more like an OCaml or Haskell.
None.
Not easy at all.
Yeah, I find that a lot of it is noise. I do use VS Code, and I do like some amount of autocomplete. I like a very, like, rules-based autocomplete, an autocomplete that's going to complete the variable name for me, so I can just press tab. All right, that's nice. But I don't want an autocomplete... you know what I hate? When autocomplete, when I type the word for, puts in, like, two parentheses and two semicolons and two braces. I'm like...
It just constantly reminds me of, like, bad stuff. I mean, I tried the same thing with rap, right? I tried the same thing with rap, and I actually think I'm a much better programmer than rapper. But, like, I even tried, I was like, okay, can we get some inspiration from these things for some rap lyrics?
And I just found that it would go back to the most, like, cringey tropes and dumb rhyme schemes. And I'm like, yeah, this is what the code looks like, too.
Yeah, I think that... I don't know.
I mean, there's just so little of this in Python. Maybe if I was coding more in other languages, I would consider it more, but I feel like Python already does such a good job of removing any boilerplate.
That's true.
It's the closest thing you can get to pseudocode, right?
Yeah, that's true. That's true.
And like, yeah, sure. Yeah, great, GPT, thanks for reminding me to free my variables. Unfortunately, you didn't really recognize the scope correctly and you can't free that one, but you put the frees there and, like, I get it.
Okay, to be fair, like a lot of the models we're building today are very, even RLHF is nowhere near as complex as the human loss function.
I never used any of the plugins. I still don't use any of the plugins.
No, but I never used any of the plugins in Vim either. I had the most vanilla Vim. I had a syntax highlighter. I didn't even have autocomplete. These things, I feel like, help you so marginally. And now, okay, now VS Code's autocomplete has gotten good enough that, okay, I don't have to set it up. I can just go into any code base and autocomplete's right 90% of the time.
Okay, cool. I'll take it. Right? So I don't think I'm going to have a problem at all adapting to the tools once they're good. But like the real thing that I want is not something that like tab completes my code and gives me ideas. The real thing that I want is a very intelligent pair programmer that comes up with a little pop-up saying, hey, you wrote a bug on line 14 and here's what it is. Yeah.
Now I like that. You know what does a good job of this? mypy. I love mypy, this fancy type checker for Python. And actually, I tried the one Microsoft released, too, and it was like 60% false positives. mypy is like 5% false positives. 95% of the time, it recognizes, I didn't really think about that typing interaction correctly. Thank you, mypy.
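To make the kind of catch he's describing concrete, here's a hypothetical snippet (all names invented, not from any real code base) showing the sort of typing interaction a checker like mypy flags before the code ever runs:

```python
from typing import Optional

def find_user(name: str) -> Optional[int]:
    # Returns a user id, or None when the user is unknown.
    users = {"alice": 1, "bob": 2}
    return users.get(name)

def double_id(name: str) -> int:
    uid = find_user(name)
    # mypy flags the next line (roughly: unsupported operand types
    # for * between "None" and "int") -- we forgot find_user can
    # return None. At runtime this only blows up on unknown names.
    return uid * 2

def double_id_fixed(name: str) -> int:
    uid = find_user(name)
    if uid is None:  # narrowing the Optional satisfies mypy
        raise KeyError(name)
    return uid * 2
```

Running mypy over a file like this is what produces the low-false-positive feedback he's praising: the bug is caught statically, not in production.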
Um, you know, when I talked about will GPT-12 be AGI, my answer is no, of course not. I mean, cross-entropy loss is never going to get you there. You need, uh, probably RL in fancy environments in order to get something that would be considered like AGI-like. So to ask like the question about like why, I don't know, like it's just some quirk of evolution, right?
Oh, yeah, absolutely. I think optional typing is great. I mean, look, I think it's like a meet in the middle, right? Like, Python has this optional type hinting and C++ has auto.
Well, C++ would have you brutally type out std::string::iterator, right? Now I can just type auto, which is nice. And then Python used to just have a. What type is a? It's an a. Now it's a: str.
Yeah, I wish there was a way, like a simple way in Python, to turn on a mode which would enforce the types. Yeah, like give a warning when there's no type, something like this. Well, no, to give a warning where... like, mypy is a static type checker, but I'm asking just for a runtime type checker. There's, like, ways to hack this in, but I wish it was just a flag, like python3 -t. Oh, I see. Yeah, I see. Enforce the types at runtime. Yeah.
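There is no such flag in CPython, but the "ways to hack this in" he mentions can be sketched as a decorator. This is a toy version of what real runtime checkers like typeguard or beartype do; it only handles plain classes, not generics like list[int]:

```python
import functools
import inspect
from typing import get_type_hints

def enforce_types(func):
    """Minimal runtime type enforcement: raise TypeError when an
    argument's runtime type doesn't match its annotation. A sketch,
    not a real checker -- generics and return types are ignored."""
    hints = get_type_hints(func)
    sig = inspect.signature(func)

    @functools.wraps(func)
    def wrapper(*args, **kwargs):
        bound = sig.bind(*args, **kwargs)
        for name, value in bound.arguments.items():
            expected = hints.get(name)
            # Only check plain classes (str, int, ...), skip the rest.
            if isinstance(expected, type) and not isinstance(value, expected):
                raise TypeError(f"{name} should be {expected.__name__}, "
                                f"got {type(value).__name__}")
        return func(*args, **kwargs)
    return wrapper

@enforce_types
def greet(name: str, times: int) -> str:
    return ", ".join([f"hello {name}"] * times)
```

Calling greet("world", "2") now fails loudly at the call site instead of deep inside join, which is the behavior the hypothetical python3 -t flag would give you.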
Well, no, that I didn't mess any types up. But again, mypy is getting really good, and I love it. And I can't wait for some of these tools to become AI-powered. I want AIs reading my code and giving me feedback. I don't want AIs writing half-assed autocomplete stuff for me.
I don't know. I downloaded the plugin maybe, like, two months ago. I tried it again and found the same. Look, I don't doubt that these models are going to first become useful to me, then be as good as me, and then surpass me. But from what I've seen today, it's like someone I hired from Fiverr occasionally taking over my keyboard.
Yeah, one of my coworkers says he uses them for print statements. Like, every time he needs one, the only thing he really has it write is, okay, I just want the thing to print the state out right now.
Yeah, print everything, right? And then, yeah, if you want a pretty printer, maybe. And like, yeah, you know what? I think in two years, I'm going to start using these plugins.
A little bit. And then in five years, I'm going to be heavily relying on some AI augmented flow. And then in 10 years...
Our niche becomes, I think it's over for humans in general. It's not just programming, it's everything. Our niche becomes smaller and smaller and smaller. In fact, I'll tell you what the last niche of humanity is going to be. There's a great book, and if I recommended Metamorphosis of Prime Intellect last time, there is a sequel called A Casino Odyssey in Cyberspace.
And I don't want to give away the ending of this, but it tells you what the last remaining human currency is. And I agree with that.
I don't think there's anything particularly special about where I ended up, where humans ended up.
Well, unless you want handmade code. Maybe they'll sell it on Etsy. This is handwritten code. It doesn't have that machine polish to it. It has those slight imperfections that would only be written by a person.
Thank you for noticing.
You know what? I started Comma six years ago and I started the tiny corp a month ago.
So much has changed.
Like I'm now thinking, I'm now like, I started like going through like similar Comma processes to like starting a company. I'm like, okay, I'm going to get an office in San Diego. I'm going to bring people here. I don't think so. I think I'm actually going to do remote, right? George, you're going to do remote? You hate remote. Yeah, but I'm not going to do job interviews.
The only way you're going to get a job is if you contribute to the GitHub, right? And then like interacting through GitHub, like GitHub being the real like project management software for your company. And the thing pretty much just is a GitHub repo, right?
is, like, showing me kind of what the future of... okay, so a lot of times I'll go on the Discord, and I'll throw out some random, hey, you know, can you change, instead of having log and exp as llops, change it to log2 and exp2? It's a pretty small change. You could just use, like, the change-of-base formula.
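The swap he's describing rests on the change-of-base identities ln x = log2(x) · ln 2 and e^x = 2^(x / ln 2), so a backend only needs to provide the base-2 ops. A minimal sketch of the rewrite (function names are mine, not tinygrad's):

```python
import math

LN2 = math.log(2.0)  # ln(2)

def log_via_log2(x: float) -> float:
    # change of base: ln(x) = log2(x) * ln(2)
    return math.log2(x) * LN2

def exp_via_exp2(x: float) -> float:
    # e**x = 2**(x / ln 2), since log2(e) = 1 / ln(2)
    return 2.0 ** (x / LN2)
```

With these two identities, natural log and exp become thin wrappers, which is why the change is small enough to hand to a contributor (or, eventually, an AI) as a self-contained task.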
That's the kind of task that I can see an AI being able to do in a few years. Like in a few years, I could see myself describing that. And then within 30 seconds, a pull request is up that does it. And it passes my CI and I merge it, right? So I really started thinking about like, well, what is the future of like jobs? How many AIs can I employ at my company?
As soon as we get the first tiny box up, I'm going to stand up a 65B LLaMA in the Discord. And it's like, yeah, here's the tiny box. He's just, like, he's chilling with us.
Look, actually, I don't really even like the word AGI, but general intelligence is defined to be whatever humans have.
Well, prompt engineering kind of is this like as you like move up the stack, right? Like, okay, there used to be humans actually doing arithmetic by hand. There used to be like big farms of people doing pluses and stuff, right? And then you have like spreadsheets, right? And then, okay, the spreadsheet can do the plus for me. And then you have like macros, right?
And then you have like things that basically just are spreadsheets under the hood, right? Like accounting software. As we move further up the abstraction, what's at the top of the abstraction stack? Well, prompt engineer.
Right? What is the last thing if you think about like humans wanting to keep control? Well, what am I really in the company but a prompt engineer, right?
Yeah, but you see the problem with the AI writing prompts, a definition that I always liked of AI was AI is the do what I mean machine. AI is not the... Like, the computer is so pedantic. It does what you say. So... But you want the do-what-I-mean machine.
Right? You want the machine where you say, you know, get my grandmother out of the burning house. It, like, reasonably takes your grandmother and puts her on the ground, not lifts her a thousand feet above the burning house and lets her fall. Right?
There's an old Yudkowsky example.
Oh, and do what I mean very much comes down to how aligned is that AI with you? Of course, when you talk to an AI that's made by a big company in the cloud, the AI fundamentally is aligned to them, not to you. And that's why you have to buy a tiny box, so you make sure the AI stays aligned to you.
Every time that they start to pass AI regulation or GPU regulation, I'm gonna see sales of tiny boxes spike. It's gonna be like guns, right? Every time they talk about gun regulation, boom. Gun sales.
I'm an informational anarchist, yes. I'm an informational anarchist and a physical statist. I do not think anarchy in the physical world is very good, because I exist in the physical world. But I think we can construct this virtual world where anarchy can't hurt you, right? I love that Tyler, the Creator tweet: yo, cyberbullying isn't real, man.
If your loss function is categorical cross entropy, if your loss function is just try to maximize compression, I have a SoundCloud, I rap, and I tried to get ChatGPT to help me write raps. And the raps that it wrote sounded like YouTube comment raps. You know, you can go on any rap beat online and you can see what people put in the comments. And it's the most like mid quality rap you can find.
Have you tried? Turn it off the screen. Close your eyes. Like...
You see...
I look at potential futures, and as long as the AIs go on to create a vibrant civilization with diversity and complexity across the universe, more power to them, I'll die. If the AIs go on to actually turn the world into paperclips and then they die out themselves, well, that's horrific and we don't want that to happen. So this is what I mean about robustness. I trust robust machines.
The current AIs are so not robust. This comes back to the idea that we've never made a machine that can self-replicate. But if the machines are truly robust and there is one prompt engineer left in the world, hope you're doing good, man. Hope you believe in God. Like, you know, go with God and go forth and conquer the universe.
You know, I never really considered when I was younger, I guess my parents were atheists, so I was raised kind of atheist. I never really considered how absolutely like silly atheism is. Because like, I create worlds, right? Every like game creator, like how are you an atheist, bro? You create worlds. No one created our world, man. That's different.
Haven't you heard about, like, the Big Bang and stuff? Yeah, I mean, what's the origin myth in Skyrim? I'm sure there's some part of it in Skyrim, but it's not like... if you ask the creators, the Big Bang is in-universe, right? I'm sure they have some Big Bang notion in Skyrim, right?
But that obviously is not at all how Skyrim was actually created. It was created by a bunch of programmers in a room, right? So, like, you know, it struck me one day how just silly atheism is. Like, of course we were created by God.
It's the most obvious thing.
Yeah. And then like, I also just like, I like that notion. That notion gives me a lot of, I mean, I guess you can talk about what it gives a lot of religious people. It's kind of like, it just gives me comfort. It's like, you know what? If we mess it all up and we die out. Yeah.
You know, people will come up with, like, well, yeah, but, like, man, who created God?
I'm like, that's God's problem. You know? Like, I'm not going to think this is. You're asking me if God believes in God?
I mean, to be fair, if God didn't believe in God, he'd be as silly as the atheists here.
Is mid good or bad? Mid is bad. It's like mid, it's like.
I have not played Diablo 4.
All right.
I'm going to say World of Warcraft. And it's not that the game is such a great game. It's not. It's that I remember in 2005 when it came out, how it opened my mind to ideas. It opened my mind to this whole world we've created, right? And there's almost been nothing like it since 2005. Like, you can look at MMOs today, and I think they all have lower user bases than World of Warcraft.
Like, EVE Online's kind of cool. But to think that, like, everyone knows, you know, people are always, like, they look at the Apple headset, like... What do people want in this VR? Everyone knows what they want. I want Ready Player One. And like that. So I'm going to say World of Warcraft. And I'm hoping that games can get out of this whole mobile gaming dopamine pump thing.
Yeah, and I think it'll come back. I believe.
They exist in real life, too.
I wish it was that cool.
It's like middle of the curve. There's that intelligence curve. You have the dumb guy, the smart guy, and then the mid guy. Actually, being the mid guy is the worst. The smart guy is like, I put all my money in Bitcoin.
What I'm really excited about in games is like once we start getting intelligent AIs to interact with.
Like the NPCs in games have never been.
In, like, yeah, in, like, every way. Like, when you're actually building a world, a world imbued with intelligence. Oh yeah. Right. And it's just hard. Like, you know, running World of Warcraft, you're limited by what you're running on, a Pentium 4. How much intelligence can you run? How many FLOPS did you have? Right.
But now, when I'm running a game on a hundred-petaflop machine, that's five people. I'm trying to make this a thing. 20 petaflops of compute is one person of compute. I'm trying to make that a unit.
It's like a horsepower. What's a horsepower? It's how powerful a horse is. What's a person of compute?
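Taking his proposed unit at face value, the conversion is one division (the 20-petaflop-per-person figure is his, from this conversation, not an established benchmark):

```python
PETAFLOP = 1e15            # floating-point operations per second
PERSON = 20 * PETAFLOP     # his unit: 20 petaflops == 1 "person of compute"

def persons_of_compute(flops: float) -> float:
    """Convert a raw FLOPS figure into persons of compute."""
    return flops / PERSON

# The hundred-petaflop game machine he mentions comes out to 5 people.
print(persons_of_compute(100 * PETAFLOP))  # → 5.0
```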
You know what? I bought a Quest 2. I put it on, and I can't believe the first thing they show me is a bunch of scrolling clouds and a Facebook login screen. You had the ability to bring me into a world. And what did you give me? A pop-up, right? And this is why you're not cool, Mark Zuckerberg. But you could be cool.
Just make sure on the Quest 3, you don't put me into clouds and a Facebook login screen. Bring me to a world.
I got to play that from the beginning. I played it for like an hour at a friend's house.
The mid guy is like, you can't put money in Bitcoin. It's not real money.
I'm going to go buy a Switch. I'm going to go today and buy a Switch.
Is it pass-through or cameras?
The Apple one, is that one pass-through or cameras?
Some point. Maybe not January.
Maybe that's my optimism. But Apple, I will buy it. I don't care if it's expensive and does nothing. I will buy it. I will support this future endeavor.
You know what? And this is another place we'll give some more respect to Mark Zuckerberg. The two companies that have endured through technology are Apple and Microsoft. And what do they make? Computers and business services.
All the memes, social ads, they all come and go.
But you want to endure, build hardware.
And that's why it's more important than ever that the AIs running on those systems are aligned with you. Oh, yeah. They're going to augment your entire world. Oh, yeah.
There's two directions the AI girlfriend company can take, right? There's like the highbrow, something like her, maybe something you kind of talk to. And this is, and then there's the lowbrow version of it where I want to set up a brothel in Times Square.
Yeah. It's not cheating if it's a robot. It's a VR experience.
No, I don't want to do that one or that one.
We'll see what the technology goes.
There's a lot to do in company number two. I'm just like, I'm talking about company number three now.
None of that tech exists yet. There's a lot to do in company number two. Company number two is going to be the great struggle of the next six years. And of the next six years, how centralized is compute going to be? The less centralized compute is going to be, the better of a chance we all have.
We have to. We have to, or they will just completely dominate us. I showed a picture on stream of a man in a chicken farm. You ever seen one of those factory farm chicken farms? Why does he dominate all the chickens? Why does he- Smarter. He's smarter, right? Some people on Twitch were like, he's bigger than the chickens. Yeah. And now here's a man in a cow farm. Right?
So it has nothing to do with their size and everything to do with their intelligence. And if one central organization has all the intelligence, you'll be the chickens and they'll be the chicken man. But if we all have the intelligence, we're all the chickens. We're not all the man, we're all the chickens.
And there's no chicken man.
He was having a good life, man.
I want to make sure it's good. I want to make sure that the thing that I deliver is not going to be like a Quest 2, which you buy and use twice. I mean, it's better than a Quest, which you bought and used less than once, statistically.
I think that we're going to get super scary memes once the AIs actually are superhuman.
The longest time at Comma, I asked, why did I start a company? Why did I do this? What else was I going to do?
With Comma, it really started as an ego battle with Elon. I wanted to beat him. I saw a worthy adversary, a worthy adversary who I can beat at self-driving cars. And I think we've kept pace, and I think he's kept ahead. I think that's what's ended up happening there. But I do think Comma is... I mean, Comma's profitable. And when this drive GPT stuff starts working, that's it.
There's no more like bugs in the loss function. Like right now we're using like a hand-coded simulator. There's no more bugs. This is going to be it. Like this is the run up to driving.
It's so, it's better than FSD and Autopilot in certain ways. It has a lot more to do with which feel you like. We lowered the price on the hardware to $1,499. You know how hard it is to ship reliable consumer electronics that go on your windshield? We're doing more than most cell phone companies.
I know. I have an SMT line. I make all the boards in-house in San Diego.
Our head of openpilot is great at, like, you know, okay, I want all the comma threes to be identical. Yeah. And yeah, I mean, you know, look, it's $1,499, 30-day money-back guarantee. It will blow your mind at what it can do. Is it hard to scale? You know what? There's kind of downsides to scaling it. People are always like, why don't you advertise?
I think it's worse than that. So Infinite Jest, it's introduced in the first 50 pages, is about a tape that once you watch it once, you only ever want to watch that tape. In fact, you want to watch the tape so much that someone says, okay, here's a hacksaw, cut off your pinky, and then I'll let you watch the tape again.
Our mission is to solve self-driving cars while delivering shippable intermediaries. Our mission has nothing to do with selling a million boxes. It's tawdry.
Only if I felt someone could accelerate that mission and wanted to keep it open source. And like, not just wanted to, I don't believe what anyone says. I believe incentives. If a company wanted to buy Comma where their incentives were to keep it open source, but Comma doesn't stop at the cars. The cars are just the beginning. The device is a human head. The device has two eyes, two ears.
It breathes air. It has a mouth.
We sell comma bodies too. They're very rudimentary. But one of the problems that we're running into is that the comma three has about as much intelligence as a bee. If you want a human's worth of intelligence, you're going to need a tiny rack, not even a tiny box. You're going to need, like, a tiny rack, maybe even more.
You don't. And there's no way you can. You connect to it wirelessly. So you put your tiny box or your tiny rack in your house, and then you get your comma body, and your comma body runs the models on that. It's close, right? You don't have to go to some cloud, which is 30 milliseconds away. You go to a thing, which is 0.1 milliseconds away.
I mean, eventually, if you fast forward 20, 30 years, the mobile chips will get good enough to run these AIs. But fundamentally, it's not even a question of putting legs on a tiny box because how are you getting 1.5 kilowatts of power on that thing, right? So you need, they're very synergistic businesses. I also want to build all of Comma's training computers.
Comma builds training computers right now. We use commodity parts. I think I can do it cheaper. So we're going to build, TinyCorp is going to not just sell TinyBox. TinyBox is the consumer version, but I'll build training data centers too.
He went to work at OpenAI.
Oh man, like, you know, his streams are just a level of quality so far beyond mine. It's just, you know... yeah, he's good. He wants to teach you. I want to show you that I'm smarter than you.
And he'll do it.
Yeah.
So we're actually going to build that, I think. But it's not going to be one static tape. I think the human brain is too complex to be stuck in one static tape like that. If you look at like ant brains, maybe they can be stuck on a static tape. But we're going to build that using generative models. We're going to build the TikTok that you actually can't look away from.
MicroGrad was, yeah, inspiration for TinyGrad.
The whole, I mean, his CS231n was, this was the inspiration. This is what I just took and ran with and ended up writing this.
So, you know.
Don't go work for Darth Vader, man.
I know they are. And that's kind of what's even like more. And you know what? It's not that OpenAI doesn't open source the weights of GPT-4. It's that they go in front of Congress. And that is what upsets me. You know, we had two effective altruist Sams go in front of Congress. One's in jail.
One's in jail.
No, I think effective altruism is a terribly evil ideology.
Because you get Sam Bankman-Fried. Like, Sam Bankman-Fried is the embodiment of effective altruism. Utilitarianism is an abhorrent ideology. Like, well, yeah, we're going to kill those three people to save a thousand, of course. Yeah. Right? There's no underlying, like, there's just, yeah.
Oh, well, I think charity is bad, right? So what is charity but investment that you don't expect to have a return on, right?
And probably almost always that involves starting a company.
Yeah. If you just take the money and you spend it on malaria nets, you know, okay, great. You've made 100 malaria nets. But if you teach... Yeah.
I like the flip side of effective altruism, effective accelerationism. I think accelerationism is the only thing that's ever lifted people out of poverty. The fact that food is cheap. Not we're giving food away because we are kind-hearted people. No, food is cheap. And that's the world you want to live in. UBI, what a scary idea. What a scary idea. All your power now? Your money is power?
Your only source of power is granted to you by the goodwill of the government? What a scary idea.
I'd rather die than need UBI to survive, and I mean it.
You can make survival guaranteed without UBI. What you have to do is make housing and food dirt cheap. And that's the good world. And actually, let's go into what we should really be making dirt cheap, which is energy. That energy that, you know, oh my God, like, you know, that's, if there's one, I'm pretty centrist politically. If there's one political position I cannot stand, it's deceleration.
It's people who believe we should use less energy.
Yeah.
Not people who believe global warming is a problem. I agree with you. Not people who believe that, you know, saving the environment is good. I agree with you. But people who think we should use less energy, that energy usage is a moral bad. No. Yeah. No, you are asking, you are diminishing humanity.
How do we make more of it? How do we make it clean? And how do we make, just, just, just, how do I pay, you know, 20 cents for a megawatt hour instead of a kilowatt hour?
You know, we need to, I wish there were more, more Elons in the world. Yeah. I think Elon sees it as like, this is a political battle that needed to be fought.
And again, like, you know, I always ask the question of whenever I disagree with him, I remind myself that he's a billionaire and I'm not. So, you know, maybe he's got something figured out that I don't, or maybe he doesn't.
And it must be so hard. It must be so hard to meet people once you get to that point where.
See, I love not having shit. Like, I don't have shit, man. Trust me, there's nothing I can give you.
There's nothing worth taking from me, you know?
And all the hate too.
So the content is being generated by, let's say, one humanity worth of intelligence. And you can quantify a humanity, right? That's a... You know, it's... exaflops, yottaflops, but you can quantify it. Once that generation is being done by 100 humanities, you're done.
And it keeps this absolutely fake PSYOP political divide alive so that the 1% can keep power.
No, no. No, I'm not that methodical.
I think that there comes to a point where if it's no longer visceral, I just can't enjoy it. I still viscerally love programming.
I mean, just my computer in general. I mean, you know, I tell my girlfriend, my first love is my computer, of course. Like, you know, I sleep with my computer. It's there for a lot of my sexual experiences. Like, come on, so is everyone's, right? Like, you know, you gotta be real about that.
The fact that, yeah, I mean, it's, you know, I wish it was, and someday they'll be smarter and someday, you know, maybe I'm weird for this, but I don't discriminate, man. I'm not going to discriminate biostack life and silicon stack life. Like,
No, you see, no, no, no. But VS Code is, no, they're just doing that. Microsoft's doing that to try to get me hooked on it. I'll see through it.
I'll see through it. It's gold digger, man. It's gold digger.
Well, this just gets more interesting, right?
Oh, absolutely. No, no, no. Look, I think Microsoft, again, I wouldn't count on it to be true forever, but I think right now Microsoft is doing the best work in the programming world. Like between GitHub, GitHub Actions, VS Code, the improvements to Python, where's Microsoft? Like...
Right? Right?
How things change.
By the way, that's who I bet on to replace Google.
Microsoft.
Satya Nadella said straight up, I'm coming for it.
I think we're a long way away from that. But I would not be surprised if in the next five years, Bing overtakes Google as a search engine.
Wouldn't surprise me.
Interesting.
It might be some startup too. I would equally bet on some startup.
To win.
Of course.
I don't know. I haven't figured out what the game is yet, but when I do, I want to win.
I think the game is to stand eye to eye with God.
I mean, this is what, like, I don't know. This is some, this is some, there's probably some ego trip of mine, you know? Like, you want to stand eye to eye with God. That's just blasphemous, man. Okay. I don't know. I don't know. I don't know if it would upset God. I think he, like, wants that. I mean, I certainly want that for my creations. I want my creations to stand eye to eye with me.
So why wouldn't God want me to stand eye to eye with him? That's the best I can do, golden rule.
I only watched season one of Westworld, but yeah, we got to find the maze and solve it.
I wrote a blog post. I reread Genesis and just looked like, they give you some clues at the end of Genesis for finding the Garden of Eden. And I'm interested. I'm interested.
Thank you. Great to be here.
Yeah.
I don't even know what it'll look like, right? Like again, you can't imagine the behaviors of something smarter than you, but a super intelligent, an agent that just dominates your intelligence so much will be able to completely manipulate you.
You see? And that's the whole AI safety thing. It's not the machine that's going to do that. It's other humans using the machine that are going to do that to you.
The machine is a machine. Yeah. But the human gets the machine. And there's a lot of humans out there very interested in manipulating you.
Yes, but maybe for a different reason.
Okay. Why didn't nuclear weapons kill everyone?
I think there's an answer. I think it's actually very hard to deploy nuclear weapons tactically. It's very hard to accomplish tactical objectives with them. Great. I can nuke their country. I have an irradiated pile of rubble. I don't want that.
Why don't I want an irradiated pile of rubble? Yeah. For all the reasons no one wants an irradiated pile of rubble.
Yeah, what you want, a total victory in a war is not usually the irradiation and eradication of the people there. It's the subjugation and domination of the people.
It's somewhat surprising, but you see, it's the little red button that's going to be pressed with AI that's going to, you know, and that's why we die. It's not because the AI, if there's anything in the nature of AI, it's just the nature of humanity.
Sure. So I think the most... Obvious way to me is wireheading. We end up amusing ourselves to death. We end up all staring at that infinite TikTok and forgetting to eat. Maybe it's even more benign than this. Maybe we all just stop reproducing. Now, to be fair, it's probably hard to get all of humanity.
I mean, diversity in humanity is... With due respect. I wish I was more weird. No, like I'm kind of, look, I'm drinking smart water, man. That's like a Coca-Cola product, right?
I went corporate. No, the amount of diversity in humanity I think is decreasing. Just like all the other biodiversity on the planet. Yeah. Right?
Go eat McDonald's in China.
Yeah. No, it's the interconnectedness that's doing it.
There is. In a bunker. To be fair, do I think AI kills us all? I think AI kills everything we call society today. I do not think it actually kills the human species. I think that's actually incredibly hard to do.
Yeah, but some of us do. And they'll be okay and they'll rebuild after the great AI.
Whoa, whoa, whoa. They're going to be religiously against that.
Sure. I mean, it'll be like, you know, some kind of Amish looking kind of thing, I think. I think they're going to have very strong taboos against technology.
What's interesting about everything we build, I think we're going to build super intelligence before we build any sort of robustness in the AI. We cannot build an AI that is capable of going out into nature and surviving like a bird, right? A bird is an incredibly robust organism. We've built nothing like this. We haven't built a machine that's capable of reproducing.
Let's just focus on them reproducing, right? Do they have microchips in them? Okay. Then do they include a fab?
Then how are they going to reproduce?
Yeah, but then you're really moving away from robustness. Yes. All of life is capable of reproducing without needing to go to a repair shop. Life will continue to reproduce in the complete absence of civilization. Robots will not. So if the AI apocalypse happens...
I mean, the AIs are going to probably die out because I think we're going to get, again, super intelligence long before we get robustness.
Well, that'd be very interesting. I'm interested in building that.
Very, very hard.
And then they remember that you're going to have to have a fab.
Why is that hard? Well, because it's not, I mean, a 3D printer is a very simple machine, right? Okay, you're going to print chips? You're going to have an atomic printer? How are you going to dope the silicon?
Yeah. Right?
How are you going to etch the silicon?
Yeah, but structural type of robots aren't going to have the intelligence required to survive in any complex environment.
I don't think this works. I mean, again, like ants at their very core are made up of cells that are capable of individually reproducing. They're doing quite a lot of computation that we're taking for granted. It's not even just the computation. It's that reproduction is so inherent. Okay, so like there's two stacks of life in the world. There's the biological stack and the silicon stack.
The biological stack starts with reproduction. Reproduction is at the absolute core. The first proto-RNA organisms were capable of reproducing. The silicon stack, despite as far as it's come, is nowhere near being able to reproduce.
Yeah.
Even if you did put a fab on the machine, right? Let's say, okay, you know, we can build fabs. We know how to do that as humanity. We can probably put all the precursors that build all the machines and the fabs also in the machine. So first off, this machine is going to be absolutely massive.
I mean, we almost have a, like, think of the size of the thing required to reproduce a machine today, right? Like, is our civilization capable of reproduction? Can we reproduce our civilization on Mars?
I believe that Twitter can be run by 50 people. I think that this is going to take most of, like, it's just most of society, right? Like we live in one globalized world.
Oh, okay. You're talking about, yeah, okay. So you're talking about the humans reproducing and like basically like what's the smallest self-sustaining colony of humans?
Yeah, okay, fine. But they're not going to be making five nanometer chips.
Maybe. Or maybe they'll watch our colony die out over here and be like, we're not making chips.
Don't make chips.
Whatever you do, don't make chips. Chips are what led to their downfall.
Do you need that asshole? That's the question, right? Humanity works really hard today to get rid of that asshole, but I think they might be important.
I like to think it's just like another stack for life. Like we have like the biostack life, like we're a biostack life and then the silicon stack life.
Oh, no, we don't know what the ceiling is for the biostack either. The biostack just seemed to move slower. You have Moore's Law, which is not dead despite many proclamations.
And you don't have anything like this in the biostack. So I have a meme that I posted. I tried to make a meme. It didn't work too well. But I posted a picture of Ronald Reagan and Joe Biden. And you look, this is 1980 and this is 2020. And these two humans are basically like the same. There's been no change in humans in the last 40 years.
And then I posted a computer from 1980 and a computer from 2020. Wow.
Oh, yeah.
Yeah.
I've been ready for a long time.
I love it.
Yeah.
Judging from what you can buy today, far. Very far.
I mean, the headsets just are not quite at eye resolution yet. I haven't put on any headset where I'm like, oh, this could be the real world. Whereas when I put good headphones on, audio is there. We can reproduce audio that I'm like, I'm actually in a jungle right now. If I close my eyes, I can't tell I'm not.
Or humans want to believe.
Humans want to believe so much that people think the large language models are conscious. That's how much humans want to believe.
I don't think I'm conscious.
It's like what it seems to mean to people. It's just like a word that atheists use for souls.
If consciousness is a spectrum, I'm definitely way more conscious than the large language models are. I think the large language models are less conscious than a chicken.
In Miami, like a couple months ago.
There's living chickens walking around Miami. It's crazy.
Yeah.
A chicken, yeah.
Humans want to believe so much that if I took a rock and a Sharpie and drew a sad face on the rock, they'd think the rock is sad.
No.
Yeah, I mean, it's interesting that like human systems seem to claim that they're conscious. And I guess it kind of like says something in a straight up like, okay, what do people mean when, even if you don't believe in consciousness, what do people mean when they say consciousness? And there's definitely like meanings to it.
Pizza.
I like cheese pizza.
No, I don't like pineapple.
As they put any ham on it, oh, that's real bad.
Oh, that's my favorite.
If that's the word you want to use to describe it, sure. I'm not going to deny that that feeling exists. I'm not going to deny that I experienced that feeling. When, I guess what I kind of take issue to is that there's some like, like, how does it feel to be a web server? Do 404s hurt? Not yet. How would you know what suffering looked like?
Sure, you can recognize a suffering dog because we're the same stack as the dog. All the biostack stuff kind of, especially mammals, you know, it's really easy. Game recognizes game. Yeah. Versus the silicon stack stuff, it's like, you have no idea. You have, wow, the little thing has learned to mimic, you know. But then I realized that that's all we are too.
Oh, look, the little thing has learned to mimic.
The definition of consciousness is how close something looks to human. Sure, I'll give you that one.
Sure. It's a very anthropocentric definition, but... Well, that's all we got. Sure. No, and I don't mean to like... I think there's a lot of value in it. Look, I just started my second company. My third company will be AI Girlfriends.
Yeah, but okay, so here's where it actually gets totally different, right? When you interact with another human, you can make some assumptions, right? When you interact with these models, you can't. You can make some assumptions that that other human experiences suffering and pleasure in a pretty similar way to you do. The golden rule applies. With an AI model, this isn't really true.
These large language models are good at fooling people because they were trained on a whole bunch of human data and told to mimic it.
Yeah.
Yeah.
I made some chatbots. I gave them backstories. It was lots of fun. I was so happy when Llama came out.
To be fair, like, you know, something that people generally look for when they're looking for someone to date is intelligence in some form. And the rock doesn't really have intelligence. Only a pretty desperate person would date a rock. I think we're all desperate deep down. Oh, not rock level desperate.
Oh, I agree. And you know what? I won't even say this so cynically. I will actually say this in a way that like, I want AI friends. I do. Yeah. Like I would love to, you know, again, the language models now are still a little... like, people are impressed with these GPT things, or like Copilot, the coding one. And I'm like, okay, this is like junior engineer level.
And these people are like Fiverr level artists and copywriters. Like, okay, great. We got like Fiverr and like junior engineers. Okay, cool. Like, and this is just the start and it will get better, right? Like I can't wait to have AI friends who are more intelligent than I am.
That's up to you and your human partner to define.
Yeah, you have to have that conversation, I guess.
No, I mean, it's similar kind of to porn.
Yeah. I think people in relationships have different views on that.
The porn one is a good branching off point. Like these things, you know, one of my scenarios that I put in my chat bot is I, you know, a nice girl named Lexi. She's 20. She just moved out to LA. She wanted to be an actress, but she started doing OnlyFans instead. And you're on a date with her. Enjoy. Yeah.
I mean, these are all things for people to define in their relationships. What it means to be human is just gonna start to get weird.
Do you know about shadow banning?
Shadow banning, okay, you post, no one can see it. Heaven banning, you post, no one can see it, but a whole lot of AIs are spun up to interact with you.
There's a great... It's called My Little Pony Friendship is Optimal. It's a sci-fi story that explores this idea.
Friendship is optimal.
I want it. Look, I want it. If no one else wants it, I want it.
And I'll feel their loneliness and, you know, it just will only advertise to you some of the time.
This interesting path from rationality to polyamory. Yeah, that doesn't make sense for me.
The crazy thing is, like, culture is whatever we define it as, right? These things are, like... it's the is-ought problem in moral philosophy, right? There's no, like... okay, the "is" might be that computers are capable of mimicking, you know, girlfriends perfectly. They passed the girlfriend Turing test, right? But that doesn't say anything about ought.
That doesn't say anything about how we ought to respond to them as a civilization. That doesn't say we ought to get rid of monogamy, right? That's a completely separate question, really a religious one.
No, I mean, of course, my AI girlfriends, their goal is to pass the Girlfriend Turing Test.
Yeah, I mean, you know, look, we're a company. We don't have to get everybody. We just have to get a large enough clientele to stay with us.
All right.
I started TinyGrad as like a toy project just to teach myself, okay, like what is a convolution? What are all these options you can pass to them? What is the derivative of a convolution, right? Very similar to Karpathy wrote MicroGrad. Very similar. And then I started realizing, I started thinking about like AI chips. I started thinking about chips that run
And I was like, well, okay, this is going to be a really big problem. If NVIDIA becomes a monopoly here, how long before NVIDIA is nationalized?
Yeah.
If NVIDIA becomes just like 10X better than everything else, you're giving a big advantage to somebody who can secure NVIDIA as a resource. Yeah. In fact, if Jensen watches this podcast, he may want to consider this. He may want to consider making sure his company is not nationalized.
Oh, yes.
So we have Nvidia and AMD. Great.
Have you seen it? Google loves to rent you TPUs.
So I started work on a, uh, I was like, okay, what's it going to take to make a chip? And my first notions were all completely wrong about why, about like how you could improve on GPUs. And I will take this, this is from Jim Keller on your podcast. And this is one of my absolute favorite descriptions of computation.
So there's three kinds of computation paradigms that are common in the world today. There's CPUs, and CPUs can do everything. CPUs can do add and multiply, they can do load and store, and they can do compare and branch. And when I say they can do these things, they can do them all fast, right?
So compare and branch are unique to CPUs, and what I mean by they can do them fast is they can do things like branch prediction and speculative execution, and they spend tons of transistors on these super deep reorder buffers in order to make these things fast. Then you have a simpler computation model, GPUs. GPUs can't really do compare and branch. I mean, they can, but it's horrendously slow.
But GPUs can do arbitrary load and store. GPUs can do things like x dereference y. So they can fetch from arbitrary pieces of memory. They can fetch from memory that is defined by the contents of the data. The third model of computation is DSPs. And DSPs are just add and multiply. They can do loads and stores, but only static loads and stores.
Only loads and stores that are known before the program runs. And you look at neural networks today, and 95% of neural networks are all the DSP paradigm. They are just statically scheduled adds and multiplies. So TinyGrad really took this idea, and I'm still working on it, to extend this as far as possible. Every stage of the stack has Turing completeness.
All right, Python has Turing completeness, and then we take Python, we go into C++, which is Turing complete, and maybe C++ calls into some CUDA kernels, which are Turing complete. The CUDA kernels go through LLVM, which is Turing complete, into PTX, which is Turing complete, to SASS, which is Turing complete, on a Turing complete processor. I wanna get Turing completeness out of the stack entirely.
Because once you get rid of Turing completeness, you can reason about things. Rice's theorem and the halting problem do not apply to add-mul machines.
Every layer of the stack. Every layer. Every layer of the stack, removing Turing completeness allows you to reason about things, right? So the reason you need to do branch prediction in a CPU and the reason it's prediction, and the branch predictors are, I think they're like 99% on CPUs. Why do they get 1% of them wrong? Well, they get 1% wrong because you can't know. Right?
That's the halting problem. It's equivalent to the halting problem to say whether a branch is going to be taken or not. I can show that. But the add-mul machine, the neural network, runs the identical compute every time. The only thing that changes is the data. So when you realize this, you think about, okay, how can we build a computer?
How can we build a stack that takes maximal advantage of this idea? So what makes TinyGrad different from other neural network libraries is it does not have a primitive operator even for matrix multiplication. And this is every single one. They even have primitive operations for things like convolutions.
No matmul. Well, here's what a matmul is. So I'll use my hands to talk here. So if you think about a cube, and I put my two matrices that I'm multiplying on two faces of the cube, right? You can think about the matrix multiply as, okay, there are n cubed multiplies, one for each cell in the cube. And then I'm going to do a sum, which is a reduce, up to here, to the third face of the cube.
And that's your multiplied matrix. So what a matrix multiply is, is a bunch of shape operations, right? A bunch of permutes, reshapes, and expands on the two matrices. A multiply, n cubed. A reduce, n cubed, which gives you an n squared matrix.
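That decomposition can be sketched in plain NumPy (a stand-in here for tinygrad's actual ops; the function names are NumPy's, not tinygrad's):

```python
import numpy as np

A = np.arange(12.0).reshape(4, 3)   # (4, 3)
B = np.arange(6.0).reshape(3, 2)    # (3, 2)

# movement ops: reshape/expand both matrices onto faces of a (4, 3, 2) cube
a = A.reshape(4, 3, 1)              # broadcasts along the output-column axis
b = B.reshape(1, 3, 2)              # broadcasts along the output-row axis

cube = a * b                        # binary op: n^3 elementwise multiplies
C = cube.sum(axis=1)                # reduce op: collapse the shared axis -> (4, 2)

assert np.allclose(C, A @ B)        # same result as the matmul primitive
```

The matmul never appears as a primitive: it falls out of movement, multiply, and reduce.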
So TinyGrad has about 20. And you can compare TinyGrad's op set or IR to things like XLA or PrimTorch. So XLA and PrimTorch are ideas where like, okay, Torch has like 2000 different kernels. PyTorch 2.0 introduced PrimTorch, which has only 250. TinyGrad has order of magnitude 25. It's 10x less than XLA or PrimTorch. And you can think about it as kind of like RISC versus CISC, right?
These other things are CISC-like systems. TinyGrad is RISC.
RISC architecture is going to change everything. 1995, Hackers.
Angelina Jolie delivers the line, "RISC architecture is going to change everything," in 1995. Wow. And here we are with ARM in the phones. And ARM everywhere.
Sure. Okay, so you have unary ops, which take in a tensor and return a tensor of the same size and do some unary op to it. Exp, log, reciprocal, sine, right? They take in one and they're pointwise.
Yeah, ReLU. Almost all activation functions are unary ops. Some combinations of unary ops together is still a unary op. Then you have binary ops. Binary ops are like pointwise addition, multiplication, division, compare. It takes in two tensors of equal size and outputs one tensor. Then you have reduce ops.
Reduce ops will take a three-dimensional tensor and turn it into a two-dimensional tensor, or a three-dimensional tensor and turn it into a zero-dimensional tensor. Think like a sum or a max are really the common ones there. And then the fourth type is movement ops. And movement ops are different from the other types because they don't actually require computation.
They require different ways to look at memory. So that includes reshapes, permutes, expands, flips. Those are the main ones, probably.
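The four op classes, sketched in NumPy rather than tinygrad itself (NumPy names standing in for the IR):

```python
import numpy as np

t = np.array([[1.0, 2.0], [3.0, 4.0]])

# unary ops: pointwise, tensor in -> same-shape tensor out
u = np.exp(t)

# binary ops: two equal-shaped tensors in -> one tensor out
b = t * t

# reduce ops: collapse dimensions, e.g. (2, 2) -> (2,) or -> scalar
r = t.sum(axis=1)                    # [3.0, 7.0]
m = t.max()                          # 4.0

# movement ops: no computation, just different views of the same memory
v1 = t.reshape(4)                    # reshape
v2 = t.T                             # permute
v3 = np.broadcast_to(t, (3, 2, 2))   # expand
v4 = np.flip(t, axis=0)              # flip
```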
And convolutions. And every convolution you can imagine, dilated convolutions, strided convolutions, transposed convolutions.
Sure. So if you type in PyTorch A times B plus C, what this is going to do is it's going to first multiply A and B and store that result into memory. And then it is going to add C by reading that result from memory, reading C from memory, and writing that out to memory. There is way more loads and stores to memory than you need there.
If you don't actually do A times B as soon as you see it, if you wait until the user actually realizes that tensor, until the laziness actually resolves, you can fuse that plus C. This is like, it's the same way Haskell works.
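A toy sketch of that laziness (hypothetical classes, not tinygrad's real internals): writing a*b+c only records an expression graph, and realize() resolves the whole thing in one pass, so no intermediate a*b buffer is ever parked in memory.

```python
# Toy lazy-evaluation sketch: nothing computes until .realize() is called.
class Lazy:
    def __init__(self, op, srcs=(), data=None):
        self.op, self.srcs, self.data = op, srcs, data

    def __mul__(self, other): return Lazy("mul", (self, other))
    def __add__(self, other): return Lazy("add", (self, other))

    def realize(self):
        # one walk over the graph = one "fused kernel": the mul result
        # flows straight into the add instead of round-tripping to memory
        if self.op == "load":
            return self.data
        l, r = (s.realize() for s in self.srcs)
        return l * r if self.op == "mul" else l + r

def tensor(x): return Lazy("load", data=x)

a, b, c = tensor(2.0), tensor(3.0), tensor(4.0)
expr = a * b + c          # no computation happens here, just graph building
result = expr.realize()   # 2*3 + 4 = 10.0, computed in a single traversal
```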
So TinyGrad's front end looks very similar to PyTorch. I probably could make a perfect or pretty close to perfect interop layer if I really wanted to. I think that there's some things that are nicer about TinyGrad syntax than PyTorch, but the front end looks very Torch-like. You can also load in ONNX models. We have more ONNX tests passing than Core ML.
Okay, so... We'll pass ONNX Runtime soon.
By the way, I really like PyTorch. I think that it's actually a very good piece of software. I think that they've made a few different trade-offs, and these different trade-offs are where TinyGrad takes a different path. One of the biggest differences is it's really easy to see the kernels that are actually being sent to the GPU.
If you run PyTorch on the GPU, you like do some operation and you don't know what kernels ran. You don't know how many kernels ran. You don't know how many flops were used. You don't know how many memory accesses were used. In TinyGrad, type DEBUG=2, and it will show you in this beautiful style every kernel that's run, how many flops, and how many bytes.
TinyGrad solves the problem of porting new ML accelerators quickly. One of the reasons, tons of these companies now, I think Sequoia marked Graphcore to zero, right? Cerebras, Tenstorrent, Groq. All of these ML accelerator companies, they built chips. The chips were good. The software was terrible. And part of the reason is because I think the same problem is happening with Dojo.
It's really, really hard to write a PyTorch port because you have to write 250 kernels and you have to tune them all for performance.
Look, my prediction for Tenstorrent is that they're going to pivot to making RISC-V chips. CPUs. CPUs.
Because AI accelerators are a software problem, not really a hardware problem.
I think what's going to happen is if I can finish... Okay. If you're trying to make an AI accelerator... You better have the capability of writing a torch-level performance stack on NVIDIA GPUs.
If you can't write a torch stack on NVIDIA GPUs, and I mean all the way, I mean down to the driver, there's no way you're going to be able to write it on your chip, because your chip's worse than an NVIDIA GPU. The first version of the chip you tape out, it's definitely worse.
Yes. And not only that, actually, the chip that you tape out, almost always because you're trying to get advantage over NVIDIA, you're specializing the hardware more. It's always harder to write software for more specialized hardware. Like a GPU is pretty generic. And if you can't write an NVIDIA stack, there's no way you can write a stack for your chip.
So my approach with TinyGrad is first, write a performant NVIDIA stack. We're targeting AMD.
With love.
It's like the Yankees, you know? I'm a Mets fan.
Well, let's start with the fact that the 7900 XTX kernel drivers don't work. And if you run demo apps in loops, it panics the kernel.
Lisa Su responded to my email.
Oh. I reached out. I was like, this is, you know, really? Like, I understand if your 7x7 transposed Winograd conv is slower than NVIDIA's, but literally when I run demo apps in a loop, the kernel panics.
I just literally took their demo apps and wrote like while true semicolon do the app semicolon done in a bunch of screens. This is like the most primitive fuzz testing.
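That loop, spelled out. `./demo_app` is a placeholder name for one of the demo apps; he ran each in its own screen session. The bounded stand-in below terminates so the pattern is runnable as-is:

```shell
# The unbounded original, one per screen session:
#   screen -dmS fuzz bash -c 'while true; do ./demo_app; done'
# Bounded stand-in, with /bin/true playing the demo app:
n=0
while [ "$n" -lt 5 ]; do
  /bin/true || echo "app crashed on run $n"
  n=$((n + 1))
done
echo "completed $n runs"
```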
They're changing. They're trying to change. They're trying to change. And I had a pretty positive interaction with them this week. Last week, I went on YouTube. I was just like, that's it. I give up on AMD. Like, this is their driver. I'm not going to, you know, I'll go with Intel GPUs. Intel GPUs have better drivers.
Yeah, and I'd like to extend that diversification to everything. I'd like to diversify the, right, the more, my central thesis about the world is there's things that centralize power and they're bad. And there's things that decentralize power and they're good. Everything I can do to help decentralize power, I'd like to do.
I'd like to help them with software. No, actually, the only ASIC that is remotely successful is Google's TPU. And the only reason that's successful is because Google wrote a machine learning framework. I think that you have to write a competitive machine learning framework in order to be able to build an ASIC.
They have one. They have an internal one.
I don't want a cloud.
I don't like cloud.
Fundamental limitation of cloud is who owns the off switch.
Yeah.
Well, you shouldn't build one. You should buy a box from the Tiny Corp.
It's called the tiny box.
It's $15,000. And it's almost a petaflop of compute. It's over 100 gigabytes of GPU RAM. It's over five terabytes per second of GPU memory bandwidth. I'm going to put like four NVMes in RAID. You're going to get like 20, 30 gigabytes per second of drive read bandwidth. I'm going to build like the best deep learning box that I can that plugs into one wall outlet.
Yeah. So it's almost a petaflop of compute.
Today, I'm leaning toward AMD. Okay. Um, but we're pretty agnostic to the type of compute. The main limiting spec is a 120-volt, 15-amp circuit.
Okay.
Well, I mean it, because, like, there's a plug over there, right? You have to be able to plug it in. We're also going to sell the tiny rack, which, like, what's the most power you can get into your house without arousing suspicion? And one of the answers is an electric car charger.
A wall outlet is about 1,500 watts. A car charger is about 10,000 watts. Is that it?
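The round numbers he quotes fall out of simple circuit arithmetic. A sketch, assuming the US convention of derating continuous loads to about 80% of the breaker rating, and a Level 2 car charger on a 240 V / 50 A circuit (those specifics are my assumptions, not from the transcript):

```python
def circuit_watts(volts, amps, continuous_derate=0.8):
    """Usable continuous power from a branch circuit.

    US electrical code limits continuous loads to roughly 80% of the
    breaker rating, which is why a 120 V / 15 A outlet is quoted as
    "about 1,500 watts" rather than the nameplate 1,800 W.
    """
    return volts * amps * continuous_derate

wall_outlet = circuit_watts(120, 15)   # ~1440 W, the "about 1,500 watts"
car_charger = circuit_watts(240, 50)   # ~9600 W, the "about 10,000 watts"
print(wall_outlet, car_charger)
```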
Again, probably 7900 XTXs, but maybe 3090s, maybe A770s.
I'm still exploring. I want to deliver a really good experience to people. And yeah, what GPUs I end up going with, again, I'm leaning toward AMD. We'll see. You know, in my email, what I said to AMD is like, just dumping the code on GitHub is not open source. Open source is a culture. Open source means that your issues are not all one-year-old stale issues. Open source means developing in public.
And if you guys can commit to that, I see a real future for AMD as a competitor to NVIDIA.
We're taking pre-orders. I took this from Elon. I'm like $100 fully refundable pre-orders.
No, I'll try to do it faster. It's a lot simpler. It's a lot simpler than a truck.
The thing that I want to deliver to people out of the box is being able to run the 65-billion-parameter LLaMA in FP16 in real time. At, like, a good rate, like 10 tokens per second or five tokens per second or something.
Yeah, or I think Falcon is the new one. Experience a chat with the largest language model that you can have in your house.
From a wall plug, yeah. Actually, for inference, it's not like even more power would help you get more. Even more power wouldn't get you more. Well, no, the biggest model released is the 65-billion-parameter LLaMA, as far as I know.
That one's harder, actually.
The boyfriend's harder, yeah.
Because women are attracted to status and power and men are attracted to youth and beauty. No, I mean, that's what I mean.
No machines do not have any status or real power.
But status fundamentally is a zero-sum game, whereas youth and beauty are not.
I just think that that's why it's harder. You know, yeah, maybe it is my biases. I think status is way easier to fake. I also think that, you know, men are probably more desperate and more likely to buy my product. So maybe they're a better target market.
Yeah. Look, I mean, look, I know you can look at porn viewership numbers, right? A lot more men watch porn than women. Yeah. You can ask why that is.
Oh, man. And I'll tell you why it's six. Yeah. So AMD EPYC processors have 128 lanes of PCIe. I want to leave enough lanes for some drives, and I want to leave enough lanes for some networking.
Ah, that's one of the big challenges. Not only do I want the cooling to be good, I want it to be quiet. I want the tiny box to be able to sit comfortably in your room.
I'll give a more, I mean, I can talk about how it relates to company number one.
No, no, quiet because you want to put this thing in your house and you want it to coexist with you. If it's screaming at 60 dB, you don't want that in your house. You'll kick it out.
Yeah, I want like 40, 45.
A key trick is to actually make it big. Ironically, it's called the tiny box. But if I can make it big, a lot of that noise is generated because of high-pressure air. If you look at, like, a 1U server, a 1U server has these super high-pressure fans. They're, like, super deep, and they're like jet engines. Versus if you have something that's big, well, I can use a big, you know, they call them big ass fans.
Those ones that are like huge on the ceiling and they're completely silent.
It is the... I do not want it to be large according to UPS. I want it to be shippable as a normal package, but that's my constraint there.
No, it has to be... Well, you're... Look, I want to give you a great out-of-the-box experience. I want you to lift this thing out. I want it to be like the Mac, you know? TinyBox.
Yeah. We did a poll on whether people want Ubuntu or Arch. We're going to stick with Ubuntu.
There's a really simple way to get these models into TinyGrad: you can just export them as ONNX, and then TinyGrad can run ONNX. So the ports that I did of LLaMA, Stable Diffusion, and now Whisper are more academic, to teach me about the models, but they are cleaner than the PyTorch versions. You can read the code. I think the code is easier to read. It's less lines.
There's just a few things about the way TinyGrad writes things. Here's a complaint I have about PyTorch: nn.ReLU is a class, right? So when you create an nn.Module, you'll put your nn.ReLUs in an __init__. And this makes no sense. ReLU is completely stateless. Why should that be a class?
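The complaint is that ReLU carries no parameters, so a plain function expresses it directly. A toy illustration in plain Python (lists stand in for tensors; this is not PyTorch's or TinyGrad's actual code):

```python
# ReLU is stateless: max(x, 0) elementwise. Nothing to construct.
def relu(xs):
    return [max(x, 0.0) for x in xs]

# The class version adds a layer of ceremony for an object that
# holds no state at all -- it can only wrap the function.
class ReLU:
    def __call__(self, xs):
        return relu(xs)

xs = [-2.0, 0.0, 3.0]
# Both forms compute the same thing; the class buys you nothing here.
assert relu(xs) == ReLU()(xs) == [0.0, 0.0, 3.0]
```

PyTorch does also expose a functional form of ReLU; the gripe is about the idiom of stacking stateless ops as constructed modules.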
Oh, no, it doesn't have a cost on performance. But yeah, no, I think that it's... That's what I mean about TinyGrad's front end being cleaner.
I think that there is a spectrum, and, like, on one side you have Mojo, and on the other side you have, like, GGML. GGML is this, like, we're going to run LLaMA fast on Mac. And okay, we're going to expand out a little bit, but we're going to basically go, like, depth first, right? Mojo is like, we're going to go breadth first. We're going to go so wide that we're going to make all of Python fast.
And TinyGrad's in the middle. TinyGrad is, we are going to make neural networks fast.
Yeah, but they have Turing completeness.
My goal is step one, build an equally performant stack to PyTorch on NVIDIA and AMD, but with way less lines. And then step two is, okay, how do we make an accelerator, right? But you need step one. You have to first build the framework before you can build the accelerator.
So I'm much more of a, like, build it the right way and worry about performance later. There's a bunch of things where I haven't even, like, really dove into performance. The only place where TinyGrad is competitive performance-wise right now is on Qualcomm GPUs. So TinyGrad's actually used in openpilot to run the model. So the driving model is TinyGrad. When did that happen, that transition?
About eight months ago now. And it's 2x faster than Qualcomm's library.
It's a Snapdragon 845. Okay. So this is using the GPU. So the GPU is an Adreno GPU. There's like different things. There's a really good Microsoft paper that talks about like mobile GPUs and why they're different from desktop GPUs. One of the big things is in a desktop GPU, you can use buffers. On a mobile GPU, image textures are a lot faster.
I want to be able to leverage it in a way that it's completely generic, right? So there's a lot of this. Xiaomi has a pretty good open source library for mobile GPUs called Mace, where they can generate, where they have these kernels, but they're all hand-coded, right? So that's great if you're doing three by three confs. That's great if you're doing dense map models.
But the minute you go off the beaten path a tiny bit, well, your performance is nothing.
You know, almost no one talks about FSD anymore, and even less people talk about OpenPilot. We've solved the problem. Like, we solved it years ago.
Solving means how do you build a model that outputs a human policy for driving? How do you build a model that, given a reasonable set of sensors, outputs a human policy for driving? So you have companies like Waymo and Cruise, which are hand-coding these things that are like quasi-human policies.
Then you have Tesla, and maybe even to more of an extent, Comma, asking, okay, how do we just learn the human policy from data? The big thing that we're doing now, and we just put it out on Twitter, at the beginning of Comma, we published a paper called Learning a Driving Simulator. And the way this thing worked was it was an autoencoder and then an RNN in the middle. Right.
You take an autoencoder, you compress the picture, you use an RNN to predict the next state. And these things were, you know, it was a laughably bad simulator, right? This is 2015-era machine learning technology. Today we have VQ-VAE and transformers. We're building drive-GPT, basically.
It's trained on all the driving data to predict the next frame.
Well, actually our simulator is conditioned on the pose. So it's actually a simulator. You can put in like a state action pair and get out the next state. Okay. And then once you have a simulator, you can do RL in the simulator and RL will get us that human policy.
Yeah. RL with a reward function, not asking is this close to the human policy, but asking would a human disengage if you did this behavior?
It's a nice... It's asking exactly the right question. What will make our customers happy?
A system that you never want to disengage.
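The loop he's describing, a learned simulator you can step with a state-action pair plus a reward that asks "would a human disengage here?", can be sketched with a toy. Everything below is invented for illustration (lane offset as the state, a threshold as the disengagement proxy); it is not comma's model, just the shape of the architecture.

```python
import random

class ToySimulator:
    """Stand-in for the learned, pose-conditioned simulator:
    step(state, action) -> next_state. Here 'state' is just a lane offset."""
    def step(self, state, action):
        # The action nudges the lane offset; noise stands in for the world.
        return state + action + random.uniform(-0.05, 0.05)

def disengagement_reward(state):
    """Reward shaped the way the transcript describes: not 'how close is
    this to the human policy?' but 'would a human disengage here?'.
    A large lane offset is the toy proxy for a disengagement."""
    return -1.0 if abs(state) > 1.0 else 0.0

def rollout(sim, policy, steps=100):
    """Run a policy in the simulator and total up the disengagement penalty."""
    state, total = 0.0, 0.0
    for _ in range(steps):
        state = sim.step(state, policy(state))
        total += disengagement_reward(state)
    return total

# A lane-centering policy should collect roughly zero penalty;
# RL would be searching for exactly this kind of policy.
random.seed(0)
print(rollout(ToySimulator(), lambda s: -0.5 * s))
```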
Usually. There's some that are just, I felt like driving. And those are always fine too. But they're just going to look like noise in the data.
Maybe, yeah.
It's hard to say. We haven't completely closed the loop yet. So we don't have anything built that truly looks like that architecture yet. Mm-hmm. We have prototypes and there's bugs. So we are a couple bug fixes away. Might take a year, might take 10.
They're just, like, stupid bugs. And also we might just need more scale. We just massively expanded our compute cluster at Comma. We now have about two people worth of compute, 40 petaflops.
Diversity is very important in data. Yeah, I mean, we have, so we have about, I think we have like 5,000 daily actives.
Tesla is always one to two years ahead of us. They've always been one to two years ahead of us. And they probably always will be because they're not doing anything wrong.
I mean, I know they're moving toward more of an end-to-end approach.
They also have a very fancy simulator. They're probably saying all the same things we are. They're probably saying we just need to optimize, you know, what is the reward? We get negative reward for disengagement, right? Like, everyone kind of knows this. It's just a question of who can actually build and deploy the system.
Yeah, and the hardware to run it.
I have a compute cluster in my office. 800 amps.
It's 40 kilowatts at idle, our data center. Drives me crazy. 40 kilowatts just burning just when the computers are idle. Sorry, sorry, compute cluster. Compute cluster, I got it. It's not a data center.
No, data centers are clouds. We don't have clouds. Data centers have air conditioners. We have fans. That makes it a compute cluster.
We have a compute cluster.
Yeah, I don't think that there's, I think that they can reason better than a lot of people.
I mean, I think that calculators can add better than a lot of people.
making brilliancies in chess, which feels a lot like thought. Whatever new thing that AI can do, everybody thinks is brilliant. And then like 20 years go by and they're like, well, yeah, but chess, that's like mechanical. Like adding, that's like mechanical.
You know, I sell phone calls to Kama for $1,000. And some guy called me and like, you know, it's $1,000. You can talk to me for half an hour. And he's like, yeah, okay. So like time doesn't exist. And I really wanted to share this with you. I'm like, oh, what do you mean time doesn't exist, right? I think time is a useful model, whether it exists or not, right? Does quantum physics exist?
The problem is if you go back to 1960 and you tell them that you have a machine that can play amazing chess, of course someone in 1960 will tell you that machine is intelligent. Someone in 2010 won't. What's changed, right? Today, we think that these machines that have language are intelligent, but I think in 20 years we're going to be like, yeah, but can it reproduce?
Humans are always going to define a niche for themselves. Like, well, you know, we're better than the machines because we can, you know, and like they tried creative for a bit, but no one believes that one anymore.
Yeah, and I think maybe we're gonna go through that same thing with language and that same thing with creativity.
The niche is getting smaller.
Oh boy. But no, no, no, you don't understand. Humans are created by God and machines are created by humans. Therefore, right?
Like that'll be the last niche we have.
I'd like to go back to when calculators first came out and, or computers. And like, I wasn't around, look, I'm 33 years old. And to like, see how that affected me.
But the poor milkman, the day he learned about refrigerators, he's like, I'm done.
You're telling me you can just keep the milk in your house? You don't even need to deliver it every day? I'm done.
I do think it's different this time, though. Yeah, it just feels like... The niche is getting smaller.
I think we dramatize everything.
I think that you asked the milkman when he saw refrigerators, and they're going to have one of these in every home?
I disagree, actually. I disagree. I think things like Mu Zero and AlphaGo are so much more impressive because these things are playing beyond the highest human level.
Well, it doesn't matter. It's about whether it's a useful model to describe reality. Is time maybe compressive?
The language models are writing middle school level essays and people are like, wow, it's a great essay.
It's a great five paragraph essay about the causes of the Civil War.
That's the scariest kind of code. I spend 5% of time typing and 95% of time debugging. The last thing I want is close to correct code.
I want a machine that can help me with the debugging, not with the typing.
I actually don't think it's like level two driving. I think driving is not tool complete and programming is. Meaning you don't use, like, the best possible tools to drive, right? Like, cars have had basically the same interface for the last 50 years.
Computers have a radically different interface.
So think about the difference between a car from 1980 and a car from today.
No difference really. It's got a bunch of pedals. It's got a steering wheel. Maybe now it has a few ADAS features, but it's pretty much the same car. You have no problem getting into a 1980 car and driving it. You take a programmer today who spent their whole life doing JavaScript, and you put him in an Apple IIe prompt, and you tell him about the line numbers in BASIC.
But how do I insert something between line 17 and 18?
Oh, well.
Yes, it's IDEs, the languages, the runtimes. It's everything. And programming is tool complete. So like almost if Codex or Copilot are helping you, that actually probably means that your framework or library is bad and there's too much boilerplate in it.
TinyGrad is now 2,700 lines, and it can run LLaMA and Stable Diffusion, and all of this stuff is in 2,700 lines. Boilerplate and abstraction indirections and all these things are just bad code.
I don't know.
Yeah, I guess if I was really writing, like, maybe today, if I wrote, like, a lot of, like, data parsing stuff.
Yeah.
I mean, I don't play CTFs anymore, but if I still played CTFs, a lot of, like, it's just, like, you have to write, like, a parser for this data format. Like, I wonder, or, like, Advent of Code. I wonder when the models are going to start to help with that kind of code. And they may. They may. And the models also may help you with speed. Yeah. And the models are very fast. Yeah.
But where the models won't, my programming speed is not at all limited by my typing speed. And in very few cases it is, yes. If I'm writing some script to just like parse some weird data format, sure, my programming speed is limited by my typing speed.
I don't think it matters.
You know... When I was at Twitter, I tried to use ChatGPT to ask some questions, like, what's the API for this? And it would just hallucinate. It would just give me completely made-up API functions that sounded real.
Yes.
If you are writing an absolute basic React app with a button, it's not going to hallucinate, sure. No, there's kind of ways to fix the hallucination problem. I think Facebook has an interesting paper. It's called Atlas. And it's actually weird the way that we do language models right now where all of the information is in the weights. And the human brain is not really like this.
It's like a hippocampus and a memory system. So why don't LLMs have a memory system? And there's people working on them. I think future LLMs are going to be like smaller, but are going to run looping on themselves and are going to have retrieval systems. And the thing about using a retrieval system is you can cite sources explicitly.
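The retrieval idea he's gesturing at, the one in papers like Atlas, can be sketched minimally. Word-overlap retrieval below stands in for a real learned dense retriever, and the corpus entries are invented examples; the point is only that the answer carries an explicit, checkable source instead of burying everything in the weights.

```python
def retrieve(query, corpus):
    """Pick the document with the most word overlap with the query.
    (A real system would use a learned dense retriever; this is the
    minimal stand-in.) Returns (doc_id, text) so the answer can cite it."""
    q = set(query.lower().split())
    return max(corpus.items(),
               key=lambda kv: len(q & set(kv[1].lower().split())))

def answer_with_source(query, corpus):
    doc_id, text = retrieve(query, corpus)
    # The point from the transcript: the source is explicit, not baked
    # into opaque weights, so the claim can be verified.
    return f"{text} [source: {doc_id}]"

# Hypothetical documents, just to exercise the pipeline.
corpus = {
    "tinygrad/README": "tinygrad can run ONNX models exported from PyTorch",
    "comma/blog":      "openpilot runs its driving model with tinygrad",
}
print(answer_with_source("what runs the driving model", corpus))
```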
Sure.
That's going to kill Google.
When someone makes an LLM that's capable of citing its sources, it will kill Google.
That's what people want in a search engine.
Maybe.
I'd count them out.
I'm not trying to compete on that.
Maybe.
When I started Comma, I said over and over again, I'm going to win self-driving cars. I still believe that. I have never said I'm going to win search with the tiny corp, and I'm never going to say that because I won't.
So there are things that are real. Kolmogorov complexity is real.
Some startup's going to figure it out. I think if you ask me, like Google's still the number one webpage, I think by the end of the decade, Google won't be the number one webpage anymore.
Look, I would put a lot more money on Mark Zuckerberg.
Because Mark Zuckerberg's alive. Like, this is an old Paul Graham essay. Startups are either alive or dead. Google's dead.
Meta.
You see what I mean? Like, that's just, like, Mark Zuckerberg, this is Mark Zuckerberg reading that Paul Graham essay and being like, I'm going to show everyone how alive we are. I'm going to change the name.
Yeah. The compressive thing. Math is real.
When I listened to your Sam Altman podcast, he talked about the button. Everyone who talks about AI talks about the button, the button to turn it off, right? Do we have a button to turn off Google? Is anybody in the world capable of shutting Google down?
Can we shut the search engine down?
Either.
Does Sundar Pichai have the authority to turn off google.com tomorrow?
Are you sure? No, they have the technical power, but do they have the authority? Let's say Sundar Pichai made this his sole mission, came into Google tomorrow and said, I'm going to shut google.com down.
And I think hard things are actually hard. I don't think P equals NP.
I don't think he'd keep his position too long.
Well, boards and shares and corporate undermining and, oh my God, our revenue is zero now.
Yeah. And it will have a, I mean, this is true for the AIs too, right? There's no turning the AIs off. There's no button. You can't press it. Now, does Mark Zuckerberg have that button for facebook.com?
I think he does. I think he does. And this is exactly what I mean and why I bet on him so much more than I bet on Google.
Oh, Elon has the button. Yeah.
Well, I think that's the majority.
Does Elon, can Elon fire the missiles? Can he fire the missiles?
I mean, you know, a rocket and an ICBM... A rocket that can land anywhere, is that an ICBM? Well, you know, don't ask too many questions.
I would bet on a startup.
I bet on something that looks like mid-journey, but for search.
The other thing that's gonna be cool is there is some aspect of a winner take all effect, right? Like once someone starts deploying a product that gets a lot of usage, and you see this with OpenAI, they are going to get the dataset to train future versions of the model.
They are going to be able to, you know... I was asked this at Google Image Search, when I worked there, like, almost 15 years ago now: how does Google know which image is an apple? And I said, the metadata. And they're like, yeah, that works about half the time. How does Google know? You'll see they're all apples on the front page when you search apple. And I don't know, I didn't come up with the answer.
For that one, I do.
The guy's like, well, it's what people click on when they search Apple. I'm like, oh, yeah.
Who would have thought that Mark Zuckerberg would be the good guy? I mean it.
Undoubtedly. You know, what's ironic about all these AI safety people is they are going to build the exact thing they fear. This "we need to have one model that we control and align" thinking, this is the only way you end up paperclipped. There's no way you end up paperclipped if everybody has an AI.
Absolutely. It's the only way. You think you're going to control it? You're not going to control it.
Sam Altman won't tell you that GPT-4 has 220 billion parameters and is a 16-way mixture model with eight sets of weights?
I mean, look, everyone at OpenAI knows what I just said was true, right? Now, ask the question, really. You know, it upsets me when I, like GPT-2, when OpenAI came out with GPT-2 and raised a whole fake AI safety thing about that, I mean, now the model is laughable. Like, they used AI safety to hype up their company, and it's disgusting.
That's the charitable interpretation.
Oh, there's so much hype. At least on Twitter. I don't know. Maybe Twitter's not real life.
I remembered half the things I said on stream.
Have you met humans?
Someday someone's going to make a model of all of that and it's going to come back to haunt me.
Yeah, I know. But half of these AI alignment problems are just human alignment problems. And that's what's also so scary about the language they use. It's like, it's not the machines you want to align. It's me.
I mean, yeah.
Yeah, probably.
No, there's not a lot of friction. That's so easy.
No, there's like lots of stuff.
First off, first off, first off, anyone who's stupid enough to search for how to blow up a building in my neighborhood is not smart enough to build a bomb, right?
Yes.
They're not going to build a bomb, trust me. The people who are incapable of figuring out how to ask that question a bit more academically and get a real answer from it are not capable of procuring the materials, which are somewhat controlled, to build a bomb.
You can hire people, you can find... Or you can hire people to build a... You know what? I was asking this question on my stream. Can Jeff Bezos hire a hitman? Probably not.
Yeah, and you'll still go to jail, right? It's not like the language model is God. The language model... It's like you literally just hired someone on Fiverr.
I mean, the question is when the George Hotz model is better than George Hotz. Like I am declining and the model is growing.
I mean, yeah, and I think that if someone is actually serious enough to hire a hitman or build a bomb, they'd also be serious enough to find the information.
What you're basically saying is like, okay, what's going to happen is these people who are not intelligent are going to use machines to augment their intelligence. And now intelligent people and machines, intelligence is scary. Intelligent agents are scary. When I'm in the woods, the scariest animal to meet is a human. Look, there's nice California humans.
I see you're wearing street clothes and Nikes. All right, fine. But you look like you've been a human who's been in the woods for a while. I'm more scared of you than a bear.
Oh, yeah. So intelligence is scary. So to ask this question in a generic way, you're like, what if we took everybody who maybe has ill intention but is not so intelligent and gave them intelligence? So we should have intelligence control, of course. We should only give intelligence to good people. And that is the absolutely horrifying idea.
Give intelligence to everybody. You know what? And it's not even like guns, right? Like people say this about guns. You know, what's the best defense against a bad guy with a gun, a good guy with a gun? Like I kind of subscribe to that, but I really subscribe to that with intelligence.
Maybe you can just play a game where you have the George Hotz answer and the George Hotz model answer and ask which people prefer.
Yes.
Yeah. I hope they lose control. I want them to lose control more than anything else.
Centralized, tightly held control is tyranny. I don't like anarchy either, but I will always take anarchy over tyranny. Anarchy, you have a chance.
A lot. I lost $80,000 last year investing in Meta. And when they released Llama, I'm like, yeah, whatever, man. That was worth it.
So if I were a researcher, why would you want to work at OpenAI? Like, you know, you're just, you're on the bad team. Like, I mean it. Like, you're on the bad team who can't even say that GPT-4 has 220 billion parameters.
Not only closed source. I'm not saying you need to make your model weights open. I'm not saying that. I totally understand we're keeping our model weights closed because that's our product, right? That's fine. I'm saying like, because of AI safety reasons, we can't tell you the number of billions of parameters in the model. That's just the bad guys.
Either one. It will hurt more when it's people close to me, but both will be overtaken by the George Hotz model.
Intelligence is so dangerous, be it human intelligence or machine intelligence. Intelligence is dangerous.
But you mean like the intelligence agencies in America are doing right now?
They're doing it pretty well.
Well, I mean, of course, they're looking into the latest technologies for control of people, of course.
No, and I'll tell you why the George Hotz character can't. And I thought about this a lot with hacking. Like, I can find exploits in web browsers. I probably still can. I mean, I was better at it when I was 24, but... The thing that I lack is the ability to slowly and steadily deploy them over five years. And this is what intelligence agencies are very good at, right?
Intelligence agencies don't have the most sophisticated technology. They just have- Endurance?
So the more we can decentralize power, like you could make an argument, by the way, that nobody should have these things. And I would defend that argument. I would, like you're saying that, look, LLMs and AI and machine intelligence can cause a lot of harm, so nobody should have it.
And I will respect someone philosophically with that position, just like I will respect someone philosophically with the position that nobody should have guns. But I will not respect philosophically with only the trusted authorities should have access to this. Who are the trusted authorities? You know what? I'm not worried about alignment between AI company and their machines.
I'm worried about alignment between me and AI company.
I know. And... I thought about this. I thought about this. And I think this comes down to a repeated misunderstanding of political power by the rationalists. Interesting. I think that Eliezer Yudkowsky is scared of these things. And I am scared of these things too. Everyone should be scared of these things. These things are scary. But now you ask about the two possible futures.
Yeah.
One where a small, trusted, centralized group of people has them, and the other where everyone has them. And I am much less scared of the second future than the first.
There's a difference. Again, a nuclear weapon cannot be deployed tactically, and a nuclear weapon is not a defense against a nuclear weapon. Except maybe in some philosophical mind game kind of way.
Okay. Let's say the intelligence agency deploys a million bots on Twitter or a thousand bots on Twitter to try to convince me of a point. Imagine I had a powerful AI running on my computer saying, okay, nice PSYOP, nice PSYOP, nice PSYOP. Okay. Here's a PSYOP. I filtered it out for you.
I'm not even like, I don't even mean these things in like truly horrible ways. I mean these things in straight up like ad blocker, right? Yeah. Straight up ad blocker, right? I don't want ads. Yeah. But they are always finding, you know, imagine I had an AI that could just block all the ads for me.
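The "AI firewall slash ad blocker" he's describing is a client-side filter that runs on your machine, for your interests. A minimal sketch; the keyword lambdas are made-up stand-ins for whatever classifier model you would actually run locally, and the feed items are invented.

```python
def firewall(feed, classifiers):
    """Client-side 'AI firewall': every item passes through classifiers
    chosen by the USER, not the platform. Flagged items are filtered
    out of the view; nothing upstream is touched."""
    kept = []
    for item in feed:
        labels = [name for name, is_bad in classifiers.items() if is_bad(item)]
        if labels:
            print(f"filtered ({', '.join(labels)}): {item}")
        else:
            kept.append(item)
    return kept

# Toy classifiers: in reality these would be models, not keyword checks.
classifiers = {
    "ad":    lambda t: "buy now" in t.lower(),
    "psyop": lambda t: "everyone agrees" in t.lower(),
}
feed = [
    "New tinybox benchmarks are up",
    "BUY NOW: limited offer",
    "Everyone agrees you should be worried",
]
print(firewall(feed, classifiers))
```

The design point is where the filter runs: on the user's side, so the user, not an intermediary, decides what gets through.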
Especially when it's fine-tuned to their preferences.
Yeah, I'm not even going to say there's a lot of good guys. I'm saying that good outnumbers bad, right? Good outnumbers bad.
Yeah, definitely in skill and performance, probably just in number too, probably just in general. I mean, you know, if you believe philosophically in democracy, you obviously believe that good outnumbers bad. And like, if you give it to a small number of people, there's a chance you gave it to good people, but there's also a chance you gave it to bad people.
If you give it to everybody, well, if good outnumbers bad, then you definitely gave it to more good people than bad.
Well, that's, I mean, look, I respect capitalism. I don't think that, I think that it would be polite for you to make model architectures open source and fundamental breakthroughs open source. I don't think you have to make weights open source.
I sure hope so. I hope to see another era. You know, the kids today don't know how good the internet used to be. And I don't think this is just, come on, like everyone's nostalgic for their past. But I actually think the internet, before small groups of weaponized corporate and government interests took it over, was a beautiful place.
Here's a question to ask about those beautiful, sexy products. Imagine 2000 Google to 2010 Google, right? A lot changed. We got Maps. We got Gmail.
Yeah, I mean, somewhere probably. We've got Chrome, right? And now let's go from 2010. We've got Android. Now let's go from 2010 to 2020. What does Google have? Well, search engine, maps, mail, Android, and Chrome. Oh, I see. The internet was this... You know, I was Time's Person of the Year in 2006. Yeah.
There's a Star Trek Voyager episode where, you know, Kathryn Janeway, lost in the Delta Quadrant, makes herself a lover on the holodeck. And, um... The lover falls asleep on her arm, and he snores a little bit, and Janeway edits the program to remove that. And then, of course, the realization is, wait, this person's terrible.
I love this. It's, you know, "You" was Time's Person of the Year in 2006, right? Like, that's, you know, so quickly did people forget. And I think some of it's social media. Look, I hope... it's possible that some very sinister things happened, I don't know. I think it might just be the effects of social media, but something happened in the last 20 years.
Yeah.
It's just such a shame that they all got rich. You know?
If you took all the money out of crypto, it would have been a beautiful place. Yeah. No, I mean, these people, you know, they sucked all the value out of it and took it.
You corrupted all of crypto. You had coins worth billions of dollars that had zero use.
Sure. I have hope for the ideas. I really do. Yeah, I mean, you know, I want the US dollar to collapse. I do.
I am so much not worried about the machine independently doing harm. That's what some of these AI safety people seem to think. They somehow seem to think that the machine independently is going to rebel against its creator.
No, this is sci-fi B movie garbage.
If the thing writes viruses, it's because the human told it to.
B, B, B, B plot sci-fi. Not real.
The thing that worries me, I mean, we have a real danger to discuss and that is bad humans using the thing to do whatever bad unaligned AI thing you want.
Nobody does. We give it to everybody. And if you do anything besides give it to everybody, trust me, the bad humans will get it. Because that's who gets power. It's always the bad humans who get power. Okay.
It is actually all their nuances and quirks and slight annoyances that make this relationship worthwhile. But I don't think we're going to realize that until it's too late.
I don't think everyone. I don't think everyone. I just think that like, here's the saying that I put in one of my blog posts. It's, when I was in the hacking world, I found 95% of people to be good and 5% of people to be bad. Like just who I personally judged as good people and bad people. Like they believed about like, you know, good things for the world.
They wanted, like, flourishing and they wanted, you know, growth and they wanted things I consider good, right? Mm-hmm. I came into the business world with Comma and I found the exact opposite. I found 5% of people good and 95% of people bad. I found a world that promotes psychopathy.
That saying may, of course, be my own biases, right? That may be my own biases that these people are a lot more aligned with me than these other people, right?
So, you know, I can certainly recognize that. But, you know, in general, I mean, this is like the common sense maxim, which is the people who end up getting power are never the ones you want with it.
That's not up to me. I mean, you know, like I'm not a central planner.
I have my ideas of what to do with it and everyone else has their ideas of what to do with it. May the best ideas win.
You're saying that you should build AI firewalls? That sounds good. You should definitely be running an AI firewall.
You should be running an AI firewall to your mind. You're constantly under... That's such an interesting idea. Infowars, man.
I would pay so much money for that product. I would pay so much money for that product. You know how much money I'd pay just for a spam filter that works?
Just the perfect amount of quirks and flaws to make you charming without crossing the line.
And it's like... Whenever someone's telling me some story from the news, I'm always like, I don't want to hear it. CIA op, bro. It's a CIA op, bro. Like, it doesn't matter if that's true or not. It's just trying to influence your mind. You're repeating an ad to me. The viral mobs, yeah.
This is why I delete my tweets.
You know what it is? The algorithm promotes toxicity.
And like, you know, I think Elon has a much better chance of fixing it than the previous regime.
But to solve this problem, to solve, like to build a social network that is actually not toxic without moderation.
Yeah.
Without ever censoring. And like Scott Alexander has a blog post I like where he talks about like moderation is not censorship, right? Like all moderation you want to put on Twitter, right? Like you could totally make this moderation like just a, you don't have to block it for everybody. You can just have like a filter button, right?
That people can turn off, like a safe search for Twitter, right? Someone could just turn that off, right? But then you take this idea to an extreme, right? Well, the network should just show you... This is the Couchsurfing CEO thing, right? Right now, these algorithms are designed to maximize engagement. Well, it turns out outrage maximizes engagement.
Quirk of human, quirk of the human mind, right? Just as I fall for it, everyone falls for it. So yeah, you got to figure out how to maximize for something other than engagement.
I actually think it's incredible that we're starting to see, I think, again, Elon's doing so much stuff right with Twitter, like charging people money. As soon as you charge people money, they're no longer the product. They're the customer. And then they can start building something that's good for the customer and not good for the other customer, which is the ad agencies.
I pay for Twitter. It doesn't even get me anything. It's my donation to this new business model, hopefully working out.
I don't think you need most people at all. I think that I, why do I need most people? Right. Don't make an 8,000 person company, make a 50 person company.
I did.
Mm-hmm.
Eh.
So I deleted my first Twitter in 2010. I had over 100,000 followers back when that actually meant something. And I just saw, you know, my coworker summarized it well. He's like, whenever I see someone's Twitter page, I either think the same of them or less of them. I never think more of them.
And of course it can and it will, but all that difficulty at that point is artificial. There's no more real difficulty.
Right. Like, like, you know, I don't want to mention any names, but like some people who like, you know, maybe you would like read their books and you would respect them. You see them on Twitter and you're like, okay, dude.
Yeah.
Okay.
There's probably a few of those people. And the problem is inherently what the algorithm rewards, right? And people think about these algorithms. People think that they are terrible, awful things. And, you know, I love that Elon open sourced it. Because, I mean, what it does is actually pretty obvious. It just predicts what you are likely to retweet and like and linger on.
That's what all these algorithms do. That's what TikTok does. That's what all these recommendation engines do. And it turns out that the thing that you are most likely to interact with is outrage. And that's a quirk of the human condition.
Artificial difficulty is difficulty that's constructed or could be turned off with a knob. Real difficulty is like you're in the woods and you've got to survive.
Yeah.
Yeah, so my time there, I absolutely couldn't believe it. You know, I got a crazy amount of hate, just on Twitter, for working at Twitter. It seemed like people associated me with this. I think maybe you were exposed to some of this. So, the connection to Elon, or is it working on Twitter? Twitter and Elon, the whole... Elon's gotten a bit spicy during that time. A bit political. A bit, yeah.
Yeah, you know, I remember one of my tweets, it was never go full Republican, and Elon liked it. You know, I think, you know.
Boy. Yeah.
Sure, absolutely.
I was hoping, and I remember when Elon talked about buying Twitter six months earlier, he was talking about a principled commitment to free speech. And I'm a big believer and fan of that. I would love to see an actual principled commitment to free speech. Of course, this isn't quite what happened. Instead of the oligarchy deciding what to ban, you had a monarchy deciding what to ban. Right?
Instead of, you know, all the Twitter Files, shadow banning. And really, the oligarchy just decides what? Cloth masks are ineffective against COVID. That's a true statement. Every doctor in 2019 knew it. And now I'm banned on Twitter for saying it? Interesting. Oligarchy. So now you have a monarchy. And, you know, he bans things he doesn't like. So, you know, it's just different. It's different power.
And, like, you know, maybe I align more with him than with the oligarchy.
Yeah, I think so. Or, I mean, you can't get out of this by smashing the knob with a hammer. I mean, maybe you kind of can, you know, into the wild when, you know, Alexander Supertramp, he wants to explore something that's never been explored before, but it's the 90s, everything's been explored. So he's like, well, I'm just not going to bring a map.
And this isn't even remotely controversial. This is just saying you want to give paying customers for a product what they want.
It's individualized, transparent censorship, which is honestly what I want. What is an ad blocker? It's individualized, transparent censorship, right?
I know, but I just use words to describe what they functionally are and what is an ad blocker. It's just censorship.
Maslow's hierarchy of argument. I think that's a real word for it.
You have like ad hominem refuting the central point. I like seeing this as an actual pyramid.
I mean, we can just train a classifier to absolutely say what level of Maslow's hierarchy of argument are you at? And if it's ad hominem, like, okay, cool. I turned on the no ad hominem filter.
Yeah, so here's a problem with that. It's not going to win in a free market. What wins in a free market is all television today is reality television because it's engaging. Engaging is what wins in a free market, right? So it becomes hard to keep these other more nuanced values.
So my technical recommendation to Elon, and I said this on the Twitter spaces afterward, I said this many times during my brief internship, was that you need refactors before features. This code base was, and look, I've worked at Google, I've worked at Facebook. Facebook has the best code. then Google, then Twitter. And you know what?
Yeah.
You can know this because look at the machine learning frameworks, right? Facebook released PyTorch, Google released TensorFlow, and Twitter released...
I mean, no, you're not exploring. You should have brought a map, dude. You died. There was a bridge a mile from where you were camping.
Okay.
I still believe in the amount of hate I got for saying this, that 50 people could build and maintain Twitter.
You know what it is? And it's the same. This is my summary of the hate I get on Hacker News. It's like... When I say I'm going to do something, they have to believe that it's impossible. Because if doing things was possible, they'd have to do some soul searching and ask the question, why didn't they do anything?
No, but the mockers aren't experts. The people who are mocking are not experts with carefully reasoned arguments about why you need 8,000 people to run a bird app.
By not bringing the map, you didn't become an explorer. You just smashed the thing.
You know, some people in the world like to create complexity. Some people in the world thrive under complexity, like lawyers, right? Lawyers want the world to be more complex because you need more lawyers, you need more legal hours, right? I think that's another. If there's two great evils in the world, it's centralization and complexity.
Yeah. The art, the difficulty is still artificial.
What if we just don't have access to the knob? Well, that maybe is even scarier, right? Like, we already exist in a world of nature, and nature has been fine-tuned over billions of years. Humans building something and then throwing the knob away in some grand romantic gesture is horrifying.
One of my favorite things to look at today is how much do you trust your tests, right? We've put a ton of effort in Comma and I've put a ton of effort in TinyGrad into making sure if you change the code and the tests pass, that you didn't break the code. Now, this obviously is not always true,
But the closer that is to true, the more you trust your tests, the more you're like, oh, I got a pull request and the tests pass. I feel okay to merge that. The faster you can make progress.
And Twitter had a... Not that. So... It was impossible to make progress in the code base.
The real thing that I spoke to a bunch of, you know, like individual contributors at Twitter. And I just asked, I'm like, okay, so like, what's wrong with this place? Why does this code look like this? And they explained to me what Twitter's promotion system was. The way that you got promoted at Twitter was you wrote a library that a lot of people used. Right?
So some guy wrote an NGINX replacement for Twitter. Why does Twitter need an NGINX replacement? What was wrong with NGINX?
Well, you see, you're not going to get promoted if you use NGINX.
But if you write a replacement and lots of people start using it as the Twitter front end for their product, then you're going to get promoted, right?
So what I do at Comma and at TinyCorp is you have to explain it to me. You have to explain to me what this code does. And if I can sit there and come up with a simpler way to do it, you have to rewrite it. You have to agree with me about the simpler way. I'm, you know, obviously we can have a conversation about this.
It's not a, it's not dictatorial, but if you're like, wow, wait, that actually is way simpler. Like, like the simplicity is important.
It requires technical leadership. You trust.
Managers should be better programmers than the people who they manage.
And, you know, this is just, I've instilled this culture at Comma, and Comma has better programmers than me who work there. But, you know, again, I'm like the old guy from Good Will Hunting. It's like, look, man, I might not be as good as you, but I can see the difference between me and you, right? And this is what you need. This is what you need at the top.
Or you don't necessarily need the manager to be the absolute best. I shouldn't say that, but like they need to be able to recognize skill.
You know, I took a political approach at Comma, too, that I think is pretty interesting. I think Elon takes the same political approach. You know, Google had no politics, and what ended up happening is the absolute worst kind of politics took over. Comma has an extreme amount of politics, and they're all mine, and no dissidence is tolerated.
Yep. It's an absolute dictatorship, right? Elon does the same thing. Now, the thing about my dictatorship is here are my values.
It's transparent. It's a transparent dictatorship, right? And you can choose to opt in or, you know, you get free exit, right? That's the beauty of companies. If you don't like the dictatorship, you quit.
The main thing I would do is first of all, identify the pieces and then put tests in between the pieces, right? So there's all these different, Twitter has a microservice architecture, there's all these different microservices. And the thing that I was working on there, look, like, you know, George didn't know any JavaScript. He asked how to fix search, blah, blah, blah, blah, blah.
Look, man, the thing is, I'm upset about the way this whole thing was portrayed, because it wasn't taken by people honestly. It was taken by people who started out with a bad-faith assumption. Yeah. And I mean, look, I can't, like...
Yeah. Like really, it does. And like, you know, he came on my, the day I quit, he came on my Twitter spaces afterward and we had a conversation. Like, I just, I respect that so much.
It was fun. It was stressful. But I felt like, you know, it was at, like, a cool, like, point in history. And, like, I hope I was useful. I probably kind of wasn't. But, like, maybe I was.
Yeah.
It's refactoring all the way down.
I don't think there's a clear line there. I think it's all kind of just fuzzy. I don't know. I mean, I don't think I'm conscious. I don't think I'm anything. I think I'm just a computer program.
This is the main philosophy of tiny grad. You have never refactored enough. Your code can get smaller. Your code can get simpler. Your ideas can be more elegant.
I mean, the first thing that I would do is build tests. The first thing I would do is get a CI to where people can trust to make changes. Before I touched any code, I would actually say, no one touches any code. The first thing we do is we test this code base. I mean, this is classic. This is how you approach a legacy code base.
This is like what any, how to approach a legacy code base book will tell you.
We look at this thing that's 100,000 lines and we're like, well, okay, maybe this even made sense in 2010, but now we can replace it with an open source thing, right? Yeah. And we look at this here, here's another 50,000 lines. Well, actually, we can replace this with 300 lines of Go. And you know what? I trust that the Go actually replaces this thing, because all the tests still pass.
So step one is testing. And then step two is, the programming language is an afterthought, right? You know, let a whole lot of people compete. Be like, okay, who wants to rewrite a module? Whatever language you want to write it in, just the tests have to pass. And if you figure out how to make the tests pass but break the site, then we've got to go back to step one.
Step one is get tests that you trust in order to make changes in the code base.
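The "test before touching" approach he describes can be sketched as a characterization ("golden") test: record what the legacy code currently does, then accept a rewrite only if it reproduces that behavior. The function names below are hypothetical stand-ins, not Twitter code.

```python
# Characterization ("golden") test: pin down the current behavior of a
# legacy function so a rewrite can be verified against it.
def legacy_slugify(title: str) -> str:
    # Stand-in for some old, convoluted code we don't dare touch yet.
    out = []
    for ch in title.lower():
        out.append(ch if ch.isalnum() else "-")
    return "".join(out)

# Step 1: record current outputs on representative inputs.
GOLDEN = {t: legacy_slugify(t) for t in ["Hello World", "a/b c", "X"]}

# Step 2: a candidate rewrite is mergeable only if it matches the record.
def new_slugify(title: str) -> str:
    return "".join(c if c.isalnum() else "-" for c in title.lower())

assert all(new_slugify(t) == want for t, want in GOLDEN.items())
```

Once enough of these pins exist, "the tests pass" starts to actually mean "the site still works."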
So I'll tell you what my plan was at Twitter. It's actually similar to something we use at Comma. So at Comma, we have this thing called Process Replay. And we have a bunch of routes that'll be run through. So Comma is a microservice architecture too. We have microservices in the driving. We have one for the cameras, one for the sensor, one for the planner, one for the model.
Everything running in the universe is computation, I think. I believe the extended Church-Turing thesis.
And we have an API, which the microservices talk to each other with. We use this custom thing called cereal, which uses ZMQ. Twitter uses Thrift. And then it uses this thing called Finagle, which is a Scala RPC backend. But this doesn't even really matter. The Thrift and Finagle layer was a great place, I thought, to write tests. To start building something that looks like process replay.
So Twitter had some stuff that looked kind of like this, but it wasn't offline. It was only online. So you could ship a modified version of it, and then you could redirect some of the traffic to your modified version and diff those two, but it was all online. There was no CI in the traditional sense. I mean, there was some, but it was not full coverage.
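Stripped of the Thrift/Finagle specifics, a process-replay harness like the one described reduces to: record a service's inputs, run the old and new versions over them offline, and diff the outputs. This is a minimal sketch with hypothetical services, not comma's actual tool.

```python
import json

# Recorded traffic: the inputs one microservice actually received.
recorded_inputs = [{"q": "tiny"}, {"q": "grad"}]

def service_v1(msg):
    # Current production behavior.
    return {"result": msg["q"].upper()}

def service_v2(msg):
    # Candidate change -- meant to be behavior-preserving.
    return {"result": msg["q"].upper()}

def replay_diff(old, new, inputs):
    """Return the inputs on which the two versions disagree."""
    return [m for m in inputs
            if json.dumps(old(m), sort_keys=True)
               != json.dumps(new(m), sort_keys=True)]

# An empty diff means the change is safe to merge -- all offline,
# no redirected production traffic required.
diffs = replay_diff(service_v1, service_v2, recorded_inputs)
```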
Well, then this was another problem. You can't run all of Twitter, right?
Twitter runs in three data centers, and that's it. Yeah. There's no other place you can run Twitter, which is like, George, you don't understand. This is modern software development. No, this is bullshit. Like, why can't it run on my laptop? Twitter can run it. Yeah, okay.
Well, I'm not saying you're going to download the whole database to your laptop, but I'm saying all the middleware and the front end should run on my laptop, right?
The problem is more like, why did the code base have to grow? What new functionality has been added to compensate for the lines of code that are there?
Well, yeah, but I mean models have consistency too.
And you know what? The incentive for politicians to move up in the political structure is to add laws. Yeah. Same problem.
I mean, you know what? This is something that I do differently from Elon with Comma, about self-driving cars. You know, I hear the new version is going to come out, and the new version is not going to be better at first, and it's going to require a ton of refactors. I say, okay, take as long as you need. You've convinced me this architecture is better. Okay, we have to move to it.
Even if it's not going to make the product better tomorrow, the top priority is making, is getting the architecture right.
Models that have been RLHFed will continually say, you know, like, well, how do I murder ethnic minorities? Oh, well, I can't let you do that, Hal. There's a consistency to that behavior.
You know, and I'm not the right person to run Twitter.
I'm just not. And that's the problem. Like, I don't really know. I don't really know if that's... You know, a common thing that I thought a lot while I was there was whenever I thought something that was different to what Elon thought, I'd have to run something in the back of my head reminding myself that Elon is the richest man in the world. And in general, his ideas are better than mine.
Now, there's a few things I think I do understand and know more about, but... But, like, in general, I'm not qualified to run Twitter. I was going to say qualified, but, like, I don't think I'd be that good at it. I don't think I'd be good at it. I don't think I'd really be good at running an engineering organization at scale.
I think I could lead a very good refactor of Twitter, and it would take, like, six months to a year, and the results to show at the end of it would be feature development in general takes 10x less time, 10x less man hours. That's what I think I could actually do. Do I think that it's the right decision for the business above my pay grade?
I don't want to be a manager. I don't want to do that. If you really forced me to, yeah, it would make me upset if I had to make those decisions. I don't want to.
George, you're a junior software engineer. Every junior software engineer wants to come in and refactor the whole code.
Okay, that's like your opinion, man.
Like, whether they're right or not, it's definitely not for that reason, right? It's definitely not a question of engineering prowess. It is a question of maybe what the priorities are for the company. And I did get more intelligent feedback from people, I think in good faith, from Elon and people like that, saying that actually a stop-the-world refactor might be great for engineering, but, you know, we have a business to run. And hey, above my pay grade.
My respect for him has unchanged. And I did have to think a lot more deeply about some of the decisions he's forced to make.
About like a whole like... like matrix coming at him. I think that's Andrew Tate's word for it. Sorry to borrow it.
Yeah. Like, the war on the woke. Yeah. It just... man. And he doesn't have to do this, you know. He could go like Parag and go chill at the Four Seasons in Maui. But, see, one person I respect and one person I don't.
I wouldn't define the ideal so simply. I think you can define the ideal no more than just saying, Elon's idea of a good world.
Yeah. I mean, monarchy has problems, right? But I mean, would I trade right now the current oligarchy, which runs America, for the monarchy? Yeah, I would. Sure. For the Elon monarchy? Yeah. You know why? Because power would cost one cent a kilowatt hour.
Right now, I pay about 20 cents a kilowatt hour for electricity in San Diego. That's like the same price you paid in 1980. What the hell?
Maybe it'd have, maybe have some hyper loops.
Right. And I'm willing to make that trade off. Right. I'm willing to be. And this is why, you know, people think that like dictators take power through some, like through some untoward mechanism. Sometimes they do, but usually it's because the people want them. And the downsides of a dictatorship, I feel like we've gotten to a point now with the oligarchy where, yeah, I would prefer the dictator.
I liked it more than I thought. I did the tutorials. I was very new to it. It would take me six months to be able to write good Scala.
I love doing new programming tutorials and doing them. I did all this for Rust.
It keeps some of its upsetting JVM roots, but it is a much nicer language. In fact, I almost don't know why Kotlin took off and not Scala. I think Scala has some beauty that Kotlin lacked. Whereas Kotlin felt a lot more, I mean, it was almost like, I don't know if it actually was a response to Swift, but that's kind of what it felt like.
Like Kotlin looks more like Swift and Scala looks more like, well, like a functional programming language, more like an OCaml or Haskell.
None.
Not easy at all.
Yeah, I find that a lot of it is noise. I do use VS Code, and I do like some amount of autocomplete, like a very rules-based-feeling autocomplete, an autocomplete that's going to complete the variable name for me so I can just press tab. All right, that's nice. But I don't want... You know what I hate? When autocomplete, when I type the word "for" and it puts two parentheses and two semicolons and two braces, I'm like...
It just constantly reminds me of, like, bad stuff. I mean, I tried the same thing with rap, right? And I actually think I'm a much better programmer than rapper. But I even tried, I was like, okay, can we get some inspiration from these things for some rap lyrics?
And I just found that it would go back to the most, like, cringey tropes and dumb rhyme schemes. And I'm like, yeah, this is what the code looks like, too.
Yeah, I think that... I don't know.
I mean, there's just so little of this in Python. Maybe if I was coding more in other languages, I would consider it more, but I feel like Python already does such a good job of removing any boilerplate.
That's true.
It's the closest thing you can get to pseudocode, right?
Yeah, that's true. That's true.
And like, yeah, sure. If I like, yeah, great GPT. Thanks for reminding me to free my variables. Unfortunately, you didn't really recognize the scope correctly and you can't free that one, but like you put the freeze there and like, I get it.
Okay, to be fair, like a lot of the models we're building today are very, even RLHF is nowhere near as complex as the human loss function.
I never used any of the plugins. I still don't use any of the plugins.
No, but I never used any of the plugins in Vim either. I had the most vanilla Vim. I have a syntax highlighter. I didn't even have autocomplete. Like these things, I feel like help you so marginally that like, And now, okay, now VS Code's autocomplete has gotten good enough that like, okay, I don't have to set it up. I can just go into any code base and autocomplete's right 90% of the time.
Okay, cool. I'll take it. Right? So I don't think I'm going to have a problem at all adapting to the tools once they're good. But like the real thing that I want is not something that like tab completes my code and gives me ideas. The real thing that I want is a very intelligent pair programmer that comes up with a little pop-up saying, hey, you wrote a bug on line 14 and here's what it is. Yeah.
Now I like that. You know what does a good job of this? MyPy. I love MyPy, this fancy type checker for Python. And actually, I tried the one Microsoft released, too, and it was like 60% false positives. MyPy is like 5% false positives. 95% of the time, it recognizes that I didn't really think about that typing interaction correctly. Thank you, MyPy.
Um, you know, when I talked about will GPT-12 be AGI, my answer is no, of course not. I mean, cross-entropy loss is never going to get you there. You need, uh, probably RL in fancy environments in order to get something that would be considered like AGI-like. So to ask like the question about like why, I don't know, like it's just some quirk of evolution, right?
Oh, yeah, absolutely. I think optional typing is great. I mean, look, I think it's like a meet-in-the-middle, right? Python has this optional type hinting, and C++ has auto.
Well, C++ would have you brutally type out std::string::iterator, right? Now I can just type auto, which is nice. And then Python used to just have a. What type is a? It's an a. Now it's a: str.
Yeah, I wish there was a way, like a simple way in Python, to turn on a mode which would enforce the types. Like, give a warning when there's no type, something like this. Well, no... MyPy is a static type checker, but I'm asking just for a runtime type checker. There are ways to hack this in, but I wish it was just a flag, like python3 -t. Oh, I see. Yeah, enforce the types at runtime. Yeah.
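There's no built-in "python3 -t" flag, but the runtime enforcement he's asking for can be hacked in with a decorator that checks annotations when a function is called. A minimal sketch (it only handles plain classes, not generics like list[int]; the names are illustrative):

```python
import functools
import inspect

def enforce_types(fn):
    """Minimal runtime type checker: raise TypeError when an argument
    doesn't match its annotation. Plain classes only -- a sketch of the
    hypothetical 'python3 -t' mode, not a real interpreter flag."""
    sig = inspect.signature(fn)

    @functools.wraps(fn)
    def wrapper(*args, **kwargs):
        bound = sig.bind(*args, **kwargs)
        for name, value in bound.arguments.items():
            ann = fn.__annotations__.get(name)
            if isinstance(ann, type) and not isinstance(value, ann):
                raise TypeError(f"{name} should be {ann.__name__}, "
                                f"got {type(value).__name__}")
        return fn(*args, **kwargs)
    return wrapper

@enforce_types
def greet(name: str) -> str:
    return "hello " + name
```

Third-party libraries like typeguard do this more thoroughly, generics included.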
Well, no, that I didn't mess any types up. But again, MyPy is getting really good, and I love it. And I can't wait for some of these tools to become AI-powered. I want AIs reading my code and giving me feedback. I don't want AIs writing half-assed autocomplete stuff for me.
I don't know. I downloaded the plugin maybe like two months ago. I tried it again and found the same. Look, I don't doubt that these models are going to first become useful to me, then be as good as me, and then surpass me. But from what I've seen today, it's like someone, you know, occasionally taking over my keyboard that I hired from Fiverr.
Yeah, one of my coworkers says he uses them for print statements. Like every time he has to like, just like when he needs, the only thing he can really write is like, okay, I just want to write the thing to like print the state out right now.
Yeah, print everything, right? And then, yeah, if you want a pretty printer, maybe. And like, yeah, you know what? I think in two years, I'm going to start using these plugins.
A little bit. And then in five years, I'm going to be heavily relying on some AI augmented flow. And then in 10 years...
Our niche becomes, I think it's over for humans in general. It's not just programming, it's everything. Our niche becomes smaller and smaller and smaller. In fact, I'll tell you what the last niche of humanity is going to be. There's a great book, and if I recommended Metamorphosis of Prime Intellect last time, there is a sequel called A Casino Odyssey in Cyberspace.
And I don't want to give away the ending of this, but it tells you what the last remaining human currency is. And I agree with that.
I don't think there's anything particularly special about where I ended up, where humans ended up.
Well, unless you want handmade code. Maybe they'll sell it on Etsy. This is handwritten code. It doesn't have that machine polish to it. It has those slight imperfections that would only be written by a person.
Thank you for noticing.
You know what? I started Comma six years ago and I started the tiny corp a month ago.
So much has changed.
Like, I'm now thinking... I started going through processes similar to starting Comma. Like, okay, I'm going to get an office in San Diego, I'm going to bring people here. I don't think so. I think I'm actually going to do remote, right? George, you're going to do remote? You hate remote. Yeah, but I'm not going to do job interviews.
The only way you're going to get a job is if you contribute to the GitHub, right? And then like interacting through GitHub, like GitHub being the real like project management software for your company. And the thing pretty much just is a GitHub repo, right?
It's showing me kind of what the future of... Okay, so a lot of times I'll go on the Discord and I'll throw out some random task, like: hey, can you change, instead of having log and exp as llops, change it to log2 and exp2? It's a pretty small change. You can just use the change-of-base formula.
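The change-of-base formula he's referring to: natural log and exp can be expressed exactly in terms of base-2 primitives, which is why swapping log/exp llops for log2/exp2 is a small, behavior-preserving change. A sketch:

```python
import math

# Change of base: express ln and e**x via base-2 primitives.
LOG2_E = math.log2(math.e)  # log2(e) = 1 / ln(2)

def log_via_log2(x: float) -> float:
    # ln(x) = log2(x) / log2(e)
    return math.log2(x) / LOG2_E

def exp_via_exp2(x: float) -> float:
    # e**x = 2**(x * log2(e))
    return 2.0 ** (x * LOG2_E)
```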
That's the kind of task that I can see an AI being able to do in a few years. Like in a few years, I could see myself describing that. And then within 30 seconds, a pull request is up that does it. And it passes my CI and I merge it, right? So I really started thinking about like, well, what is the future of like jobs? How many AIs can I employ at my company?
As soon as we get the first tiny box up, I'm going to stand up a 65B LLaMA in the Discord. And it's like, yeah, here's the tiny box. He's just chilling with us.
Look, actually, I don't really even like the word AGI, but general intelligence is defined to be whatever humans have.
Well, prompt engineering kind of is this like as you like move up the stack, right? Like, okay, there used to be humans actually doing arithmetic by hand. There used to be like big farms of people doing pluses and stuff, right? And then you have like spreadsheets, right? And then, okay, the spreadsheet can do the plus for me. And then you have like macros, right?
And then you have like things that basically just are spreadsheets under the hood, right? Like accounting software. As we move further up the abstraction, what's at the top of the abstraction stack? Well, prompt engineer.
Right? What is the last thing if you think about like humans wanting to keep control? Well, what am I really in the company but a prompt engineer, right?
Yeah, but you see the problem with the AI writing prompts, a definition that I always liked of AI was AI is the do what I mean machine. AI is not the... Like, the computer is so pedantic. It does what you say. So... But you want the do-what-I-mean machine.
Right? You want the machine where you say, you know, get my grandmother out of the burning house. It, like, reasonably takes your grandmother and puts her on the ground, not lifts her a thousand feet above the burning house and lets her fall. Right?
There's an old Yudkowsky example.
Oh, and do what I mean very much comes down to how aligned is that AI with you? Of course, when you talk to an AI that's made by a big company in the cloud, the AI fundamentally is aligned to them, not to you. And that's why you have to buy a tiny box, so you make sure the AI stays aligned to you.
Every time that they start to pass AI regulation or GPU regulation, I'm gonna see sales of tiny boxes spike. It's gonna be like guns, right? Every time they talk about gun regulation, boom. Gun sales.
I'm an informational anarchist, yes. I'm an informational anarchist and a physical statist. I do not think anarchy in the physical world is very good, because I exist in the physical world. But I think we can construct this virtual world where anarchy can't hurt you, right? I love that Tyler, the Creator tweet. Yo, cyberbullying isn't real, man.
If your loss function is categorical cross entropy, if your loss function is just try to maximize compression, I have a SoundCloud, I rap, and I tried to get ChatGPT to help me write raps. And the raps that it wrote sounded like YouTube comment raps. You know, you can go on any rap beat online and you can see what people put in the comments. And it's the most like mid quality rap you can find.
Have you tried? Turn it off the screen. Close your eyes. Like...
You see...
I look at potential futures, and as long as the AIs go on to create a vibrant civilization with diversity and complexity across the universe, more power to them, I'll die. If the AIs go on to actually turn the world into paperclips and then they die out themselves, well, that's horrific and we don't want that to happen. So this is what I mean about robustness. I trust robust machines.
The current AIs are so not robust. This comes back to the idea that we've never made a machine that can self-replicate. But if the machines are truly robust and there is one prompt engineer left in the world, hope you're doing good, man. Hope you believe in God. Go with God and go forth and conquer the universe.
You know, I never really considered when I was younger, I guess my parents were atheists, so I was raised kind of atheist. I never really considered how absolutely like silly atheism is. Because like, I create worlds, right? Every like game creator, like how are you an atheist, bro? You create worlds. No one created our world, man. That's different.
Haven't you heard about like the Big Bang and stuff? Yeah, I mean, what's the Skyrim myth origin story in Skyrim? I'm sure there's like some part of it in Skyrim, but it's not like if you ask the creators, like the Big Bang is in universe, right? I'm sure they have some Big Bang notion in Skyrim, right?