
Casey Liss

👤 Person
4566 total appearances

Appearances Over Time

Podcast Appearances

Accidental Tech Podcast
624: Do Less Math in Computers

This is bananas to Americans. And obviously in America, you know, you could write into a contract, you know, I'm taking the stove or I'm taking this or I'm taking that. But generally speaking, normally kitchen fixtures, well, the fixtures particularly like cabinets and countertops and whatnot, those always stay. I've never heard of those going.

Accidental Tech Podcast
624: Do Less Math in Computers

But even like stoves and ovens and in a lot of cases, microwaves, if they're, you know, built-ins, all of those tend to stay. Yeah.

Accidental Tech Podcast
624: Do Less Math in Computers

All right. So there's been a big brouhaha over the last several days about DeepSeek. And we're going to read, probably I'll be reading quite a lot of different things for better and for worse. But DeepSeek is a new AI thing from this Chinese company that I don't think any, well, not literally, of course, but most Americans hadn't heard of. I certainly hadn't heard of it.

Accidental Tech Podcast
624: Do Less Math in Computers

And they released some stuff, and we'll talk about what here in a second. And it done shooketh the American stock markets and a lot of big tech here in America. And a lot of big tech took a bath in the stock market over the last week. So reading from Ars Technica.

Accidental Tech Podcast
624: Do Less Math in Computers

On Monday, NVIDIA stock lost 17% amid worries over the rise of the Chinese AI company DeepSeek, whose R1 reasoning model stunned industry observers last week by challenging American AI supremacy with a low-cost, freely available AI model, and whose AI Assistant app jumped to the top of the iPhone App Store's free apps category over the weekend, overtaking ChatGPT.

Accidental Tech Podcast
624: Do Less Math in Computers

The drama started around January 20th when the Chinese AI startup DeepSeek announced R1, a new simulated reasoning, or SR, model that it claimed could match OpenAI's O1 reasoning benchmarks. There are three elements of the DeepSeek R1 that really shocked experts.

Accidental Tech Podcast
624: Do Less Math in Computers

First, the Chinese startup appears to have trained the model for only about $6 million – that's American dollars – reportedly about 3% of the cost of training O1, and as a so-called, quote-unquote, side project, while using less powerful NVIDIA H800 AI acceleration chips due to the U.S. export restrictions on cutting-edge GPUs.
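The two figures quoted above also pin down an implied cost for O1. A quick back-of-envelope check, using only the numbers read in the article (the implied total is an inference, not a reported figure):

```python
# Rough implied O1 training cost from the article's figures.
deepseek_cost = 6_000_000   # reported R1 training cost, USD
fraction = 0.03             # "about 3% of the cost of training O1"

implied_o1_cost = deepseek_cost / fraction
print(f"${implied_o1_cost:,.0f}")  # ≈ $200 million
```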

Accidental Tech Podcast
624: Do Less Math in Computers

Second, it appeared just four months after OpenAI announced O1 in September of '24. And finally, and perhaps most importantly, DeepSeek released the model weights for free with an open MIT license, meaning anyone can download it, run it, and fine-tune or modify it.

Accidental Tech Podcast
624: Do Less Math in Computers

Hi, I'm back. All right. So Ben Thompson, friend of the show, did, I believe this is a non-paywalled post, which he called DeepSeek FAQ. And honestly, it's worth reading. I could sit here and read the whole damn thing, but it would take a while. So I'm just going to read the snippets that John has thankfully curated for us. There is a lot here. So gentlemen, please interrupt when you're ready.

Accidental Tech Podcast
624: Do Less Math in Computers

The DeepSeek V2 model introduced two important breakthroughs, DeepSeek MoE and DeepSeek MLA. The MoE in DeepSeek MoE refers to mixture of experts. Some models, like GPT-3.5, activate the entire model during both training and inference. It turns out, however, that not every part of the model is necessary for the topic at hand.
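The mixture-of-experts idea described here can be sketched in a few lines. This is an illustrative toy, not DeepSeek's actual architecture: the expert count, dimensions, and linear "experts" are made up. The point it shows is the sparse activation — a small gating network scores all experts, but only the top-k actually run for a given input:

```python
import numpy as np

def moe_forward(x, experts, gate_w, top_k=2):
    """Route input x through only the top_k highest-scoring experts."""
    logits = x @ gate_w                  # one gating score per expert
    chosen = np.argsort(logits)[-top_k:] # indices of the top_k experts
    weights = np.exp(logits[chosen])
    weights /= weights.sum()             # softmax over the chosen experts only
    # Only top_k expert networks execute; the rest stay idle this pass.
    return sum(w * experts[i](x) for i, w in zip(chosen, weights))

rng = np.random.default_rng(0)
d = 4
# Eight tiny linear "experts"; a real MoE layer would use full FFN blocks.
mats = [rng.standard_normal((d, d)) for _ in range(8)]
experts = [lambda x, W=W: x @ W for W in mats]
gate_w = rng.standard_normal((d, 8))

y = moe_forward(rng.standard_normal(d), experts, gate_w)
```

With eight experts and top_k=2, only a quarter of the parameters are touched per input, which is the compute saving being described.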
