
AI可可AI生活

[For Everyone] Order Inside the Black Box, Decluttering in Memory

04 Dec 2025

Description

Have you ever wondered whether AI is learning not just knowledge, but also how to learn, how to forget, and even how to develop its own distinctive "handwriting"? In this episode, we watch an AI that has "seen countless tables" crack hard problems in seconds, and uncover how the almost compulsive hidden order of neural network training takes shape. We also explore a counterintuitive finding: why does letting an AI "grok" actually make it forget faster and more cleanly? And how does an AI learn to "declutter," actively filtering its memories to improve itself? Finally, we discuss how to engrave an indelible "invisible signature" into an open-weight model. Ready? Let's dive into the deep end of AI thinking. (A few illustrative code sketches of these ideas follow the paper list below.)

00:00:42 Your tabular data needs an AI that has "seen the world"
00:05:56 The mysterious order in AI training: a key to the riddle of "neural collapse"
00:11:18 Want a machine to forget fast? First let it learn until it "groks"
00:16:17 AI's "decluttering": why a clever mind must learn to forget
00:21:49 AI's "invisible ink": how to engrave an indelible signature into an open-weight model

Papers covered in this episode:

[LG] Accurate predictions on small data with a tabular foundation model
[University of Freiburg]
https://www.nature.com/articles/s41586-024-08328-6.pdf
---
[LG] Diagonalizing the Softmax: Hadamard Initialization for Tractable Cross-Entropy Dynamics
[University of Oxford & University of British Columbia]
https://arxiv.org/abs/2512.04006
---
[LG] Grokked Models are Better Unlearners
[Cardiff University]
https://arxiv.org/abs/2512.03437
---
[LG] Cache What Lasts: Token Retention for Memory-Bounded KV Cache in LLMs
[JPMorganChase AI Research & Yale University]
https://arxiv.org/abs/2512.03324
---
[LG] MarkTune: Improving the Quality-Detectability Trade-off in Open-Weight LLM Watermarking
[University of Pennsylvania & CMU & Columbia University]
https://arxiv.org/abs/2512.04044
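A quick taste of the first paper for the hands-on listener: the authors released TabPFN as a Python package with a scikit-learn-style interface. This is a minimal sketch, assuming `pip install tabpfn scikit-learn` and a toy dataset that is not from the paper; the model downloads pretrained weights on first use.

from sklearn.datasets import load_breast_cancer
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split
from tabpfn import TabPFNClassifier  # the released TabPFN package

# A small tabular task: ~570 rows, 30 numeric features.
X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# No per-dataset training loop: fit() stores the training rows as
# in-context examples, and predict() is a forward pass of the
# pretrained transformer over (context, query) pairs.
clf = TabPFNClassifier()
clf.fit(X_train, y_train)
print(f"accuracy: {accuracy_score(y_test, clf.predict(X_test)):.3f}")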
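On the second paper: the "mysterious order" segment concerns a Hadamard initialization that diagonalizes the softmax cross-entropy dynamics. As background (this is textbook math, not the paper's code), a Hadamard matrix has mutually orthogonal rows of ±1 entries, and the classic Sylvester construction builds one recursively:

import numpy as np

def hadamard(n: int) -> np.ndarray:
    """Sylvester construction of an n x n Hadamard matrix (n a power of 2)."""
    assert n > 0 and n & (n - 1) == 0, "n must be a power of two"
    H = np.array([[1.0]])
    while H.shape[0] < n:
        # Doubling step: [[H, H], [H, -H]] keeps all rows orthogonal.
        H = np.block([[H, H], [H, -H]])
    return H

H = hadamard(8)
print(np.allclose(H @ H.T, 8 * np.eye(8)))  # True: H @ H.T = n * I

Orthogonal ±1 rows like these, used as initial classifier weights, are the kind of structure that lets otherwise coupled softmax logits be analyzed coordinate by coordinate; the paper's exact recipe is in the link above.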
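For the "decluttering" segment: the fourth paper keeps an LLM's KV cache within a fixed memory budget by retaining only the tokens that matter. The sketch below shows the generic shape of such token retention, with a hypothetical per-token score array standing in for whatever criterion the paper actually uses.

import numpy as np

def retain_tokens(keys, values, scores, budget):
    # Generic memory-bounded retention: keep the `budget` tokens with
    # the highest retention scores, preserving their sequence order.
    if keys.shape[0] <= budget:
        return keys, values
    keep = np.argsort(scores)[-budget:]
    keep.sort()  # restore chronological order of the survivors
    return keys[keep], values[keep]

# Toy usage: a cache of 10 tokens with 8-dim heads, budget of 4.
rng = np.random.default_rng(0)
keys, values = rng.normal(size=(10, 8)), rng.normal(size=(10, 8))
attn_mass = rng.random(10)  # stand-in retention scores, e.g. attention mass
keys, values = retain_tokens(keys, values, attn_mass, budget=4)
print(keys.shape)  # (4, 8)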
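And for the "invisible ink" segment: MarkTune's own method is in the paper, but the family of watermarks it builds on can be illustrated with a classic "green list" detector in the spirit of Kirchenbauer et al.: each token is pseudo-randomly "green" given its predecessor, watermarked text over-produces green tokens, and the detector measures the excess. A hedged sketch with hypothetical hashing details:

import hashlib

def green_fraction(token_ids, vocab_size=50257, gamma=0.5):
    """Fraction of tokens falling in the pseudo-random 'green list'.

    Unwatermarked text scores about gamma in expectation; watermarked
    text, noticeably higher. A real detector turns this into a z-score.
    """
    hits = 0
    for prev, cur in zip(token_ids, token_ids[1:]):
        # Seed the green/red split for `cur` on the preceding token.
        seed = int(hashlib.sha256(str(prev).encode()).hexdigest(), 16)
        if (cur ^ seed) % vocab_size < gamma * vocab_size:
            hits += 1
    return hits / max(len(token_ids) - 1, 1)

print(green_fraction([17, 4242, 91, 13, 600, 8]))  # ~gamma for random ids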
