Mark Blyth

👤 Speaker
435 total appearances

Appearances Over Time

Podcast Appearances

The Twenty Minute VC (20VC): Venture Capital | Startup Funding | The Pitch
20VC: AI Scaling Myths: More Compute is not the Answer | The Core Bottlenecks in AI Today: Data, Algorithms and Compute | The Future of Models: Open vs Closed, Small vs Large with Arvind Narayanan, Professor of Computer Science @ Princeton

So we have smaller models, and they're effective, as we said, because of cost, and they're popular because of cost. What does that do to the requirements in terms of compute?

The Twenty Minute VC (20VC): Venture Capital | Startup Funding | The Pitch
20VC: AI Scaling Myths: More Compute is not the Answer | The Core Bottlenecks in AI Today: Data, Algorithms and Compute | The Future of Models: Open vs Closed, Small vs Large with Arvind Narayanan, Professor of Computer Science @ Princeton

When we think about the alignment of compute and models, we had David Cahn from Sequoia on the show, and he said that you would never train a frontier model on the same data center twice, meaning that there is now a misalignment: models are being developed much faster than new hardware and compute. How do you think about that?

The Twenty Minute VC (20VC): Venture Capital | Startup Funding | The Pitch
20VC: AI Scaling Myths: More Compute is not the Answer | The Core Bottlenecks in AI Today: Data, Algorithms and Compute | The Future of Models: Open vs Closed, Small vs Large with Arvind Narayanan, Professor of Computer Science @ Princeton

So we are releasing new models so fast that compute is unable to keep up with them. And as a result, you won't want to train your new model on H100 hardware that is 18 months old. You continuously need the newest hardware for every single new frontier model.

The Twenty Minute VC (20VC): Venture Capital | Startup Funding | The Pitch
20VC: AI Scaling Myths: More Compute is not the Answer | The Core Bottlenecks in AI Today: Data, Algorithms and Compute | The Future of Models: Open vs Closed, Small vs Large with Arvind Narayanan, Professor of Computer Science @ Princeton

Speaking of that commoditization, the thing I'm interested in there is the benchmarking, or the determination that models are suddenly commoditized or of roughly equal performance. You said before that LLM evaluation is a minefield. Help me understand: why is LLM evaluation a minefield?

The Twenty Minute VC (20VC): Venture Capital | Startup Funding | The Pitch
20VC: AI Scaling Myths: More Compute is not the Answer | The Core Bottlenecks in AI Today: Data, Algorithms and Compute | The Future of Models: Open vs Closed, Small vs Large with Arvind Narayanan, Professor of Computer Science @ Princeton

We mentioned some of the early use cases, like passing the bar, and some really wild applications of these models. I do want to move a layer deeper to the companies building the products and the leaders running those companies. You've got Zuck and Demis saying that AGI is further out than we think.

The Twenty Minute VC (20VC): Venture Capital | Startup Funding | The Pitch
20VC: AI Scaling Myths: More Compute is not the Answer | The Core Bottlenecks in AI Today: Data, Algorithms and Compute | The Future of Models: Open vs Closed, Small vs Large with Arvind Narayanan, Professor of Computer Science @ Princeton

And then you have Sam Altman, and you have Dario and Elon in some cases, saying it's sooner than we think. What are your reflections and analysis on company leaders' predictions about AGI?

The Twenty Minute VC (20VC): Venture Capital | Startup Funding | The Pitch
20VC: AI Scaling Myths: More Compute is not the Answer | The Core Bottlenecks in AI Today: Data, Algorithms and Compute | The Future of Models: Open vs Closed, Small vs Large with Arvind Narayanan, Professor of Computer Science @ Princeton

Is it possible to have a dual strategy of chasing AGI and superintelligence, as OpenAI very clearly are, while at the same time creating valuable products that can be used every day? Or are those two goals actually mutually exclusive?
