Have you ever wondered whether AI is learning not just knowledge, but also how to learn, how to forget, and even how to develop its own distinctive "handwriting"? In this episode, we watch an AI that has "seen countless tables" solve hard problems in seconds, and uncover how the mysterious, almost obsessive order that emerges during neural network training actually forms. We also explore a counterintuitive finding: why does letting an AI learn to the point of "grokking" make it forget faster and more cleanly? And how does AI practice "decluttering", actively filtering its memories to improve itself? Finally, we discuss how to engrave an indelible "invisible signature" into open-weight models. Ready? Let's dive into the deep end of AI thinking.

00:00:42 Your tabular data needs an AI that has "seen the world"
00:05:56 The mysterious order in AI training: a key to the puzzle of "neural collapse"
00:11:18 Want a machine to unlearn quickly? First let it learn until it "groks"
00:16:17 AI's "decluttering": why do smart models need to learn to forget?
00:21:49 AI's "invisible ink": how to engrave an unremovable signature into open-weight models

Papers covered in this episode:

[LG] Accurate predictions on small data with a tabular foundation model
[University of Freiburg]
https://www.nature.com/articles/s41586-024-08328-6.pdf
---
[LG] Diagonalizing the Softmax: Hadamard Initialization for Tractable Cross-Entropy Dynamics
[University of Oxford & University of British Columbia]
https://arxiv.org/abs/2512.04006
---
[LG] Grokked Models are Better Unlearners
[Cardiff University]
https://arxiv.org/abs/2512.03437
---
[LG] Cache What Lasts: Token Retention for Memory-Bounded KV Cache in LLMs
[JPMorganChase AI Research & Yale University]
https://arxiv.org/abs/2512.03324
---
[LG] MarkTune: Improving the Quality-Detectability Trade-off in Open-Weight LLM Watermarking
[University of Pennsylvania & CMU & Columbia University]
https://arxiv.org/abs/2512.04044