Jonathan Ross
Podcast Appearances
It's about duplicating data with high fidelity and distributing it. It's what the telephone does. It's what the internet does. It's what the printing press did. They're all the same technology, just at very different scale, speed, and capability. Generative AI is different. It's about coming up with something contextual, creative, unique in the moment. And so the LLM is just the printing press of the generative age.
It's the start of it. And then there's going to be all these other stages. Just imagine trying to start Uber when we didn't have mobile yet. Great, I'm going to book a trip over to here. How do I get home? You can't carry a desktop with you, right? So you need to be at the right stage.
So when I look at Perplexity, I look at Perplexity as being perfectly positioned for the moment that the hallucination, or really confabulation, rate comes down. The moment that these models get good enough that you don't have to check the citations anymore, that's going to open up a whole set of industries. All of a sudden, you'll be able to do medical diagnoses from LLMs.
You'll be able to do legal work from LLMs. Until then, it's like trying to create Uber before we had smartphones. It just doesn't make any sense. However, people are willing to use Perplexity today, even though you have to check the citations. So they have an actual business that gets to continue. So they're getting to sort of ride the wave.
And the moment that that tsunami of lack of confabulation or hallucination comes along, they're perfectly positioned. Each company has to find their own thing. And I would look at Suno as a great example of how things are being done around the product as opposed to just the models.
Disruption happens. If you're not able to pivot now, you're not going to be able to pivot later when you get disrupted anyway.
How do you think about that? What you see is a bunch of people who are concerned about training and the need for it. And everyone's still thinking that most of compute is training, and that there's going to be less of it because someone trained a model on 2,000 GPUs, the nerfed A800 version with slower memory or whatever it is. And they're like, oh, people aren't going to need as many chips.