
AI: post transformers

Offloading LLM Models and KV Caches to NVMe SSDs

08 Sep 2025

Description

This March 2025 paper examines the input/output (I/O) characteristics of offloading large language model (LLM) components to NVMe SSDs during inference, a key approach for overcoming GPU memory limits as LLMs continue to grow. The researchers analyzed block-layer I/O traces from two prominent LLM frameworks, DeepSpeed and FlexGen, to understand how model weights and key-value (KV) caches are handled.

The findings indicate that asynchronous I/O using libaio significantly outperforms POSIX I/O for tensor transfers, although neither method fully saturates the NVMe SSD's theoretical bandwidth. For model offloading, I/O is dominated by 128 KiB reads that occur mostly at the beginning of inference, while KV cache offloading involves both reads and writes of similar size, with read bandwidth substantially higher than write bandwidth. Ultimately, the research suggests that modern NVMe SSDs can support current LLM inference workloads, but it highlights opportunities for further optimization in SSD design and KV cache management.

Source: https://dl.acm.org/doi/10.1145/3719330.3721230
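To make the libaio-versus-POSIX distinction concrete, here is a minimal C sketch of the asynchronous pattern described above: several 128 KiB read requests submitted to an NVMe-backed file through libaio's io_submit/io_getevents interface. This is not code from the paper or from DeepSpeed/FlexGen; the file name ("model_weights.bin"), queue depth, and offsets are illustrative assumptions. Build with: gcc aio_read_sketch.c -laio

/* Minimal sketch: asynchronous 128 KiB reads via libaio, the interface the
 * paper found to outperform POSIX I/O for tensor transfers.
 * File name, queue depth, and offsets are illustrative only. */
#define _GNU_SOURCE
#include <fcntl.h>
#include <libaio.h>
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>

#define CHUNK  (128 * 1024)  /* 128 KiB, the dominant request size observed */
#define QDEPTH 8             /* assumed queue depth, not taken from the paper */

int main(void)
{
    /* O_DIRECT bypasses the page cache, so reads go straight to the NVMe SSD */
    int fd = open("model_weights.bin", O_RDONLY | O_DIRECT);  /* hypothetical file */
    if (fd < 0) { perror("open"); return 1; }

    io_context_t ctx = 0;
    if (io_setup(QDEPTH, &ctx) < 0) { fprintf(stderr, "io_setup failed\n"); return 1; }

    struct iocb cbs[QDEPTH], *cbp[QDEPTH];
    void *bufs[QDEPTH];
    for (int i = 0; i < QDEPTH; i++) {
        /* O_DIRECT requires sector-aligned buffers */
        if (posix_memalign(&bufs[i], 4096, CHUNK) != 0) return 1;
        io_prep_pread(&cbs[i], fd, bufs[i], CHUNK, (long long)i * CHUNK);
        cbp[i] = &cbs[i];
    }

    /* Submit all reads at once, then block until every completion arrives */
    if (io_submit(ctx, QDEPTH, cbp) != QDEPTH) { fprintf(stderr, "io_submit failed\n"); return 1; }

    struct io_event events[QDEPTH];
    int done = io_getevents(ctx, QDEPTH, QDEPTH, events, NULL);
    for (int i = 0; i < done; i++)
        if (events[i].res != CHUNK)
            fprintf(stderr, "read %d returned %ld bytes\n", i, (long)events[i].res);
    printf("completed %d asynchronous reads of %d bytes each\n", done, CHUNK);

    io_destroy(ctx);
    close(fd);
    return 0;
}

A synchronous POSIX baseline would instead issue one blocking pread() per 128 KiB chunk, keeping only a single request in flight at a time, which is consistent with the paper's observation that the asynchronous path achieves higher bandwidth without saturating the SSD.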


