The TWIML AI Podcast (formerly This Week in Machine Learning & Artificial Intelligence)
Quantizing Transformers by Helping Attention Heads Do Nothing with Markus Nagel - #663
26 Dec 2023
Today we’re joined by Markus Nagel, research scientist at Qualcomm AI Research, who helps us kick off our coverage of NeurIPS 2023. In our conversation with Markus, we cover his accepted papers at the conference, along with other work presented by Qualcomm AI Research scientists. Markus’ first paper, Quantizable Transformers: Removing Outliers by Helping Attention Heads Do Nothing, examines the activation quantization issues introduced by the attention mechanism and how to solve them. We also discuss Pruning vs Quantization: Which is Better?, which compares the effectiveness of these two methods for model weight compression. Additional papers cover topics such as using scalarization in multitask and multidomain learning to improve training and inference, using diffusion models to model sequences of states and actions, applying geometric algebra with equivariance to transformers, and applying deductive verification to the chain-of-thought reasoning performed by LLMs. The complete show notes for this episode can be found at twimlai.com/go/663.
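As background for the first paper: attention heads that want to "do nothing" must push softmax outputs toward extreme values, which creates the activation outliers that make quantization hard. One remedy the paper describes is a clipped softmax that lets attention weights reach exactly zero. Below is a minimal PyTorch sketch of that idea; the `gamma` and `zeta` values are illustrative assumptions, not the paper's tuned hyperparameters.

```python
import torch
import torch.nn.functional as F

def clipped_softmax(logits: torch.Tensor, gamma: float = -0.03,
                    zeta: float = 1.0, dim: int = -1) -> torch.Tensor:
    """Stretch softmax output from [0, 1] to [gamma, zeta], then clip back to [0, 1].

    With gamma < 0, small probabilities clip to exactly 0, so a head can
    ignore a token without driving its logit toward -inf (the source of
    the large activation outliers that hurt quantization).
    """
    probs = F.softmax(logits, dim=dim)
    stretched = (zeta - gamma) * probs + gamma
    return torch.clamp(stretched, min=0.0, max=1.0)

# Toy usage: the low-scoring token receives exactly zero attention weight.
logits = torch.tensor([[2.0, 2.0, 2.0, -4.0]])
print(clipped_softmax(logits))
```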
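To make the second paper's comparison concrete, here is a hedged sketch (not the paper's methodology) contrasting the two compression operations on the same weight tensor: magnitude pruning zeroes out the smallest weights, while uniform quantization rounds every weight to a coarse grid. The function names and the particular sparsity/bit-width pairing are assumptions chosen only for illustration.

```python
import torch

def magnitude_prune(w: torch.Tensor, sparsity: float) -> torch.Tensor:
    """Zero out the smallest-magnitude weights, keeping the top (1 - sparsity)."""
    k = int(w.numel() * sparsity)
    if k == 0:
        return w.clone()
    threshold = w.abs().flatten().kthvalue(k).values
    return torch.where(w.abs() > threshold, w, torch.zeros_like(w))

def uniform_quantize(w: torch.Tensor, bits: int) -> torch.Tensor:
    """Symmetric uniform quantization: round weights onto a 2^bits-level grid."""
    scale = w.abs().max() / (2 ** (bits - 1) - 1)
    return torch.round(w / scale) * scale

# Compare reconstruction error of the two schemes on a random weight matrix.
w = torch.randn(4, 4)
err_prune = (w - magnitude_prune(w, sparsity=0.75)).pow(2).mean().item()
err_quant = (w - uniform_quantize(w, bits=2)).pow(2).mean().item()
print(f"MSE pruning: {err_prune:.4f}  MSE quantization: {err_quant:.4f}")
```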