The TWIML AI Podcast (formerly This Week in Machine Learning & Artificial Intelligence)
Speculative Decoding and Efficient LLM Inference with Chris Lott - #717
04 Feb 2025
Today, we're joined by Chris Lott, senior director of engineering at Qualcomm AI Research, to discuss accelerating large language model inference. We explore the challenges presented by LLM encoding and decoding (aka generation) and how these phases interact with hardware constraints such as FLOPS, memory footprint, and memory bandwidth to limit key inference metrics such as time-to-first-token, tokens per second, and tokens per joule. We then dig into a variety of techniques that can be used to accelerate inference, such as KV cache compression, quantization, pruning, speculative decoding, and leveraging small language models (SLMs). We also discuss future directions for enabling on-device agentic experiences, such as parallel generation and software tools like Qualcomm AI Orchestrator. The complete show notes for this episode can be found at https://twimlai.com/go/717.
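
To make the bandwidth constraint concrete: at batch size 1, each decoded token requires streaming essentially all model weights from memory, so a hypothetical 7B-parameter model quantized to 8-bit weights (~7 GB) on a device with ~50 GB/s of effective memory bandwidth tops out near 7 tokens per second regardless of available FLOPS. Speculative decoding attacks this limit by having a cheap draft model propose several tokens that the large target model verifies in a single pass. Below is a minimal sketch of the standard draft-then-verify acceptance rule (accept a drafted token with probability min(1, p_target/p_draft)); it is illustrative only, not Qualcomm's implementation, and the toy draft_dist/target_dist functions are hypothetical stand-ins for real models.

# Minimal sketch of one speculative-decoding step, using the standard
# draft-then-verify acceptance rule. draft_dist and target_dist are
# toy stand-ins (fixed random categoricals), not real language models.
import numpy as np

VOCAB = 8
rng = np.random.default_rng(0)

def draft_dist(prefix):
    # Toy small/draft model: a deterministic categorical per prefix.
    r = np.random.default_rng(hash(tuple(prefix)) % (2**32))
    p = r.random(VOCAB)
    return p / p.sum()

def target_dist(prefix):
    # Toy large/target model: a different categorical per prefix.
    r = np.random.default_rng((hash(tuple(prefix)) + 1) % (2**32))
    p = r.random(VOCAB)
    return p / p.sum()

def speculative_step(prefix, k=4):
    """Draft k tokens cheaply, then verify against the target model.
    Returns the tokens actually emitted (between 1 and k+1 of them)."""
    # 1) Draft phase: sample k tokens autoregressively from the small model.
    draft_tokens, q_probs = [], []
    ctx = list(prefix)
    for _ in range(k):
        q = draft_dist(ctx)
        t = rng.choice(VOCAB, p=q)
        draft_tokens.append(t)
        q_probs.append(q)
        ctx.append(t)
    # 2) Verify phase: the target scores all k positions (one batched
    #    forward pass in a real system; sequential here for clarity).
    accepted = []
    ctx = list(prefix)
    for t, q in zip(draft_tokens, q_probs):
        p = target_dist(ctx)
        if rng.random() < min(1.0, p[t] / q[t]):  # accept with prob min(1, p/q)
            accepted.append(t)
            ctx.append(t)
        else:
            # Reject: resample from the residual max(0, p - q), renormalized.
            # This keeps the output distribution identical to the target's.
            residual = np.maximum(p - q, 0.0)
            accepted.append(rng.choice(VOCAB, p=residual / residual.sum()))
            return accepted
    # 3) All k accepted: the target's pass yields one bonus token for free.
    accepted.append(rng.choice(VOCAB, p=target_dist(ctx)))
    return accepted

print(speculative_step([1, 2, 3], k=4))

Because the target model's weights are read once per verification pass rather than once per token, the expected speedup scales with the average number of drafted tokens accepted per pass, which is why a well-matched small draft model (or SLM) is central to the technique.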