
AI可可AI生活

[For Everyone] From Attention Calibration and Parallel Restructuring to Precision Pruning

18 Dec 2025

Description

Have you ever wondered whether AI also loses focus while "reading," and needs to "cram at the last minute" to recalibrate its attention? Or that the sharpest efficiency gains can come from a shrewd kind of "laziness" inspired by the Macedonian phalanx? In this episode we unlock several new AI skills in one go: watch a model go from the drudgery of "carving out one word at a time" to the artistry of "writing a passage in one sweep"; see how it convenes an internal "council of experts" to root out the "moles" hiding in its data; and learn how it performs precise yet efficient "weight-loss surgery" on itself. Five new papers, five ingenious ideas. Let's see how AI is learning to think and work more intelligently.

00:00:42 Does AI's attention drift while "reading"? A trick for last-minute cramming
00:06:14 How your productivity tools were designed by "lazy" programmers
00:12:25 A new way for AI to "write": from carving word by word to sweeping out whole passages
00:19:15 Masters at work: how to get AI to root out the "moles" on its own
00:25:10 Slimming down large models: how to do it both fast and well

Papers covered in this episode:
[LG] Let's (not) just put things in Context: Test-Time Training for Long-Context LLMs [Meta & Harvard University] https://arxiv.org/abs/2512.13898
[LG] Sliding Window Recurrences for Sequence Models [Université de Montréal & Stanford University] https://arxiv.org/abs/2512.13921
[CL] Efficient-DLM: From Autoregressive to Diffusion Language Models, and Beyond in Speed [NVIDIA & Georgia Tech] https://arxiv.org/abs/2512.14067
[AI] Adjudicator: Correcting Noisy Labels with a KG-Informed Council of LLM Agents [Google] https://arxiv.org/abs/2512.13704
[LG] OPTIMA: Optimal One-shot Pruning for LLMs via Quadratic Programming Reconstruction [University of Toronto & Google DeepMind] https://arxiv.org/abs/2512.13886
