
Nicole Perlroth


Podcast Appearances

lawmakers snuck in a clause that explicitly bans state or local governments from regulating AI on critical systems, like our elections, for 10 years. We don't even know what offensive AI is going to look like a year from now, let alone a decade. And we're tying our own hands behind our back. And that, that is truly terrifying. Because AI is still very much an infant.

And like a child's earliest years, these first stages are formative. We have a critical but narrow window to get this right. But that window closes a little faster every day. AI is already outpacing Moore's Law. We're in the midst of a full-blown paradigm shift. The question now is, will we repeat the mistakes of our past, or will we do what is necessary to get this right?

The emergence last January of a little-known Chinese AI startup called DeepSeek may be an early stress test.

When DeepSeek first dropped its AI model last January, it landed like an earthquake, not just for what it did, but for how it did it. DeepSeek was able to accomplish much of what OpenAI and Google and Anthropic and Meta could do with their AI models at a fraction of the cost and computing power. And then came the kicker.

DeepSeek released its model as quote-unquote open source, and those quotation marks are very much intended. Here's Igor Jablokov, an AI pioneer who sold Amazon the technology that would later form the basis of Alexa, and who more recently serves as the founder and CEO of Pryon. What does it mean that DeepSeek is quote-unquote open source?

The distinction between open source and what Igor refers to as open weight is a critical one. With a truly open source approach like Wikipedia, you can click in and interrogate where all the information you're reading came from, down to who wrote the words and when. You can see which sources they reference. You can investigate those sources. You can check the work, edit, and make improvements.

DeepSeek is not actually open source in that sense. DeepSeek is open weight. The pre-trained model weights are available for download and use, but the actual training data and training code are still a black box. You can't replicate it. You can only build on top of it.
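
To make that distinction concrete, here is a minimal illustrative sketch, not from the episode, assuming Python and the Hugging Face transformers library; the checkpoint name is simply one of DeepSeek's published weight releases, used here as an example.

    # What "open weight" lets you do: download the published weights and
    # run or build on them. Requires: pip install transformers torch
    from transformers import AutoModelForCausalLM, AutoTokenizer

    model_id = "deepseek-ai/DeepSeek-R1-Distill-Qwen-1.5B"  # example published weights

    tokenizer = AutoTokenizer.from_pretrained(model_id)
    model = AutoModelForCausalLM.from_pretrained(model_id)

    prompt = "Explain the difference between open source and open weight."
    inputs = tokenizer(prompt, return_tensors="pt")
    outputs = model.generate(**inputs, max_new_tokens=100)
    print(tokenizer.decode(outputs[0], skip_special_tokens=True))

    # What "open weight" does NOT give you: the training data and training
    # code were never released, so the training run that produced these
    # weights cannot be rerun, reproduced, or audited from scratch.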

Sticking with the Wikipedia analogy, it'd be like going to a page and reading the content, but the footnotes and author sections are blacked out. You can add to it, you can build on it, but you can't check the work.

So you can build on top of it, but you can't completely understand what's inside. And you can use it at a tiny fraction of the cost of OpenAI's GPT. And we're talking cost savings of 96%. In some sense, it's Huawei in a different form.

Its pricing and efficiency all but guarantee that, without some intervention, these cheaper Chinese AI models will become the de facto backbone of the next generation of technology. And that presents real risk. Now, how much risk depends on how you use it.
