Reid Hoffman
Podcast Appearances
One of the things we learned from COVID is that one of the challenges of globalization tends to be that, oh, we consolidate to one point of manufacture, but then you're brittle. So when Italy went down with COVID, all of a sudden the specific medical manufacturing that was concentrated there suddenly broke for the rest of the world. So you want several different points of origin for resilience and robustness against possible adversity. So ideally, in a good world, you'd have two to four or five places of semiconductor manufacture, and at least one or two of those would be within Western nations. If that were the case, then we'd be good. So a single point of dependency on Taiwan carries a bunch of global risk.
So this is unsurprising to you, because you and I have talked about this a bunch. Both things are true: NVIDIA's business will continue to go very strongly, because they have an amazing lead with amazing chips and there is massive demand for their chips for compute, and many players, Google, Amazon, others, including AMD, are building out chips as well. So you have them developing
what is still a very valuable leading resource, and a bunch of other chips developing as well. It's kind of like electricity: we have effectively infinite demand for compute at moderate pricing. So as long as we can deliver that, demand will completely fill the supply.
By the way, incorrect, but yes. We're not reaching the upper end of LLMs. It's kind of like, look, the press cycle likes to go, aha, we haven't seen anything in the last six months, so we're at the upper end. Well, even if it were true that we hadn't seen anything, and by the way, we did see o1, right? Even if we hadn't seen anything in the last couple of months, that doesn't mean it's the end of the cycle.
It means we're still getting to the next set of stunning things. I think the next large LLM, the one trained with a larger computer, will still have new magic in it. And all of the people who argue against it are, to some degree, talking their own book: well, I don't have the compute, so the next level of compute won't make a difference. I don't have the data,
so the next level of data won't make the difference. Or: the current data is all we have, so we won't be able to do it. It's like, well, actually, we can create synthetic data, and there's a ton of data out there that's not part of the standard internet training corpus. The scale game is still playing out.
Well, by the way, he's right that, look, we're going to have a ton of agents, and I think the agents will be composed of multiple models. But to count out the next level of scaled-up models as being important in creating a number of quality agents is just incorrect.