Have you ever wondered whether AI, too, loses focus while "reading" and needs a bit of last-minute cramming to recalibrate its attention? Or that the sharpest efficiency gains can come from a clever kind of "laziness" known as the "Macedonian phalanx"? In this episode we unlock several new AI skills in one go: watch it transform from a laborer who "carves word by word" into an artist who "writes in whole paragraphs"; see how it convenes an internal "council of experts" to hunt down the "moles" hidden in its data; and learn how it performs precise, efficient "weight-loss surgery" on itself. Five new papers, five clever ideas; let's see how AI is learning to think and work smarter.

00:00:42 AI also zones out while "reading"? A last-minute trick to help it refocus
00:06:14 How your efficiency tools get designed by "lazy" programmers
00:12:25 A new way for AI to "write": from carving word by word to sweeping out whole paragraphs
00:19:15 Masters at work: how to get AI to catch the "moles" on its own
00:25:10 Slimming down large models: how to do it both fast and well

Papers covered in this episode:
[LG] Let's (not) just put things in Context: Test-Time Training for Long-Context LLMs [Meta & Harvard University] https://arxiv.org/abs/2512.13898
[LG] Sliding Window Recurrences for Sequence Models [Université de Montréal & Stanford University] https://arxiv.org/abs/2512.13921
[CL] Efficient-DLM: From Autoregressive to Diffusion Language Models, and Beyond in Speed [NVIDIA & Georgia Tech] https://arxiv.org/abs/2512.14067
[AI] Adjudicator: Correcting Noisy Labels with a KG-Informed Council of LLM Agents [Google] https://arxiv.org/abs/2512.13704
[LG] OPTIMA: Optimal One-shot Pruning for LLMs via Quadratic Programming Reconstruction [University of Toronto & Google DeepMind] https://arxiv.org/abs/2512.13886