AI: post transformers

INF2: Near-Storage LLM Inference for High Throughput

10 Sep 2025

Description

This February 2025 paper introduces INF2, a framework designed to raise the generative inference throughput of large language models (LLMs) by using computational storage devices (CSDs). The core innovation, attention-near storage (ANS), offloads the memory-intensive self-attention operation to accelerators inside the storage devices themselves, sharply reducing data transfer over the system interconnect. To further boost performance, INF2 adds delayed KV cache writeback, which reduces storage write latency by batching updates to the KV cache, and cooperative X-cache, which saves host memory by storing input activations instead of key-value pairs for cooperative processing between the GPU and the CSDs. Together, these techniques yield up to 3.46x higher throughput than existing state-of-the-art baselines in evaluations on a real system.

Source: https://arxiv.org/html/2502.09921v1
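Of the three techniques, delayed KV cache writeback is the easiest to illustrate in code. The sketch below shows only the batching idea under stated assumptions: the class DelayedKVWriteback, the csd_write_fn callback, and the WRITEBACK_BATCH size are hypothetical names invented for this illustration, not interfaces from the paper.

```python
# Minimal sketch of the delayed KV cache writeback idea (hypothetical names;
# not the paper's actual implementation or API).
import numpy as np

WRITEBACK_BATCH = 8  # assumed: decode steps to buffer before one storage write


class DelayedKVWriteback:
    """Buffers per-step K/V tensors in host memory and persists them to the
    computational storage device (CSD) in one batched transfer, instead of
    issuing a small storage write on every decode step."""

    def __init__(self, csd_write_fn, batch_steps=WRITEBACK_BATCH):
        self.csd_write = csd_write_fn  # callable(list_of_kv_pairs) -> None
        self.batch_steps = batch_steps
        self.pending = []  # K/V pairs generated but not yet written to the CSD

    def append_kv(self, k, v):
        self.pending.append((k, v))
        if len(self.pending) >= self.batch_steps:
            self.flush()

    def flush(self):
        if self.pending:
            # One large sequential write amortizes per-write latency.
            self.csd_write(self.pending)
            self.pending = []


if __name__ == "__main__":
    # Toy decode loop: 20 steps produce 3 batched writes (8 + 8 + 4)
    # instead of 20 single-step writes.
    writes = []
    wb = DelayedKVWriteback(csd_write_fn=writes.append)
    for step in range(20):
        k = np.random.randn(1, 64).astype(np.float32)
        v = np.random.randn(1, 64).astype(np.float32)
        wb.append_kv(k, v)
    wb.flush()  # persist any tail smaller than one full batch
    print(f"{len(writes)} batched writes instead of 20 single-step writes")
```

The design point mirrors what the description claims for INF2: many small per-step storage writes are replaced by fewer large sequential ones, which is what reduces the write latency on the KV cache path.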
