This podcast episode takes a deep dive into a breakthrough technique called CCQ (Convolutional Code Quantization). Facing the rising cost and difficulty of deploying large language models (LLMs), CCQ proposes an innovative extremely-low-bit quantization scheme. We discuss how CCQ combines convolutional codes, hybrid encoding, and code clustering to compress models down to 2.0–2.75 bits with almost no loss in accuracy. We also explore how its distinctive lookup-table-free, bit-shift decoding design resolves the inference-speed bottleneck of traditional vector quantization, enabling the feat of deploying ultra-large models (such as 文心4.5) on a single GPU. Tune in to learn about this technology that could reshape the landscape of large-model deployment.
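To make the "lookup-table-free, bit-shift decoding" idea concrete, here is a minimal, hypothetical sketch contrasting codebook-lookup decoding (vector-quantization style) with shift-and-mask arithmetic decoding of 2-bit packed weights. The function names, the 2-bit packing layout, and the toy scale/zero-point values are all illustrative assumptions, not CCQ's actual design:

```python
# Hypothetical illustration, not CCQ's actual algorithm: decoding four
# 2-bit weight codes packed into one byte, two ways.

def decode_with_lut(packed: bytes, lut):
    """VQ-style decode: one codebook (table) lookup per weight."""
    out = []
    for byte in packed:
        for shift in (0, 2, 4, 6):           # four 2-bit codes per byte
            code = (byte >> shift) & 0b11
            out.append(lut[code])            # extra memory access per weight
    return out

def decode_with_shifts(packed: bytes, scale=0.5, zero=1.5):
    """Lookup-free decode: shift/mask, then a pure affine map (no table)."""
    out = []
    for byte in packed:
        for shift in (0, 2, 4, 6):
            code = (byte >> shift) & 0b11
            out.append((code - zero) * scale)  # arithmetic only
    return out

packed = bytes([0b11_10_01_00])              # codes 0,1,2,3, low bits first
lut = [(c - 1.5) * 0.5 for c in range(4)]    # codebook matching the affine map
assert decode_with_lut(packed, lut) == decode_with_shifts(packed)
```

The point of the contrast: when the codebook is replaceable by closed-form arithmetic, each decoded weight costs only register operations instead of a table fetch, which is what removes the memory-bound bottleneck the episode attributes to traditional vector quantization.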