This June 2023 paper introduces FlexGen, a high-throughput generation engine designed to overcome the substantial compute and memory demands of large language model (LLM) inference on limited hardware, specifically a single commodity GPU. FlexGen aggregates memory and computation across the GPU, CPU, and disk, using an optimized scheduling strategy and a linear programming-based policy search to decide where tensors are stored and how they are accessed. It also applies 4-bit compression to both the model weights and the attention (KV) cache, sharply reducing the memory footprint with minimal accuracy loss. The evaluation shows FlexGen achieving substantially higher throughput than existing offloading systems, and it can run models as large as OPT-175B on a single 16GB GPU.

Source: https://arxiv.org/pdf/2303.06865
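The 4-bit compression mentioned above is described in the paper as fine-grained group-wise quantization applied to both weights and the attention (KV) cache. The sketch below illustrates the general idea in NumPy: split a tensor into small groups, then map each group linearly onto the 16 integer levels of a 4-bit code with a per-group scale and zero point. The group size of 64 and the function names are illustrative assumptions for this sketch, not FlexGen's actual implementation.

```python
import numpy as np

def quantize_4bit_groupwise(x: np.ndarray, group_size: int = 64):
    """Asymmetric 4-bit group-wise quantization (minimal sketch, not FlexGen's code).

    Flattens the tensor, splits it into groups of `group_size` elements, and maps
    each group linearly onto the integer range [0, 15] with a per-group scale and
    zero point (the group minimum).
    """
    flat = x.astype(np.float32).ravel()
    pad = (-len(flat)) % group_size                      # pad so length divides evenly
    flat = np.concatenate([flat, np.zeros(pad, dtype=np.float32)])
    groups = flat.reshape(-1, group_size)

    g_min = groups.min(axis=1, keepdims=True)
    scale = (groups.max(axis=1, keepdims=True) - g_min) / 15.0   # 2^4 - 1 levels
    scale = np.where(scale == 0, 1.0, scale)             # avoid division by zero
    q = np.clip(np.round((groups - g_min) / scale), 0, 15).astype(np.uint8)
    return q, scale, g_min, x.shape, pad

def dequantize_4bit_groupwise(q, scale, g_min, shape, pad):
    """Invert the mapping back to float32 (lossy)."""
    flat = (q.astype(np.float32) * scale + g_min).ravel()
    if pad:
        flat = flat[:-pad]
    return flat.reshape(shape)

if __name__ == "__main__":
    w = np.random.randn(512, 512).astype(np.float32)
    q, s, z, shape, pad = quantize_4bit_groupwise(w)
    w_hat = dequantize_4bit_groupwise(q, s, z, shape, pad)
    print("mean abs error:", np.abs(w - w_hat).mean())
```

In practice the quantized codes would also be packed two per byte before being moved to CPU memory or disk; the sketch keeps one code per byte for readability.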