We all know AI is getting more powerful, but have you ever wondered how we can make it run faster, more stably, and even become more "versatile"? In this episode, we explore several recent papers and see how researchers are fitting AI training with a more robust navigation system, and uncover the science behind the "folk remedies" of master AI painters. We also discuss how to raise a "generalist": teaching a single AI to learn two hundred tasks at once. Finally, we witness two feats of "magic": how to efficiently slim down a large model without any data, and how to perform precise keyhole surgery on a black-box model.

00:00:41 How do we give AI a smarter "navigation system"?
00:05:19 What training secret lies behind master AI painters?
00:11:06 Raising an AI generalist: how can one machine learn 200 tasks?
00:17:12 Slimming down an AI model: how do you make something from nothing?
00:25:14 How many steps does keyhole surgery on an AI model take?

Papers covered in this episode:
[LG] ROOT: Robust Orthogonalized Optimizer for Neural Network Training [Huawei Noah's Ark Lab] https://arxiv.org/abs/2511.20626
[LG] Demystifying Diffusion Objectives: Reweighted Losses are Better Variational Bounds [Google DeepMind] https://arxiv.org/abs/2511.19664
[LG] Learning Massively Multitask World Models for Continuous Control [University of California San Diego] https://arxiv.org/abs/2511.19584
[LG] CafeQ: Calibration-free Quantization via Learned Transformations and Adaptive Rounding [Google] https://arxiv.org/abs/2511.19705
[LG] ModHiFi: Identifying High Fidelity predictive components for Model Modification [CSA, IISc & HP Inc. AI Lab & Google] https://arxiv.org/abs/2511.19566