
AI: post transformers

NeurIPS 2025: MoBA: Mixture of Block Attention for Long-Context LLMs

29 Nov 2025

Description

This paper introduces Mixture of Block Attention (MoBA) to address the prohibitive quadratic computational overhead of traditional attention mechanisms when scaling large language models (LLMs) to long contexts. MoBA is a novel architecture that applies the established Mixture of Experts (MoE) paradigm directly to the attention mechanism itself. Instead of attending to the entire sequence, MoBA partitions the context into discrete blocks and uses a dynamic gating network to route each query to only the most relevant blocks of keys and values. This block-sparse approach yields sub-quadratic complexity and speedups of up to 16x on sequences of up to 10 million tokens. Crucially, the research demonstrates that MoBA maintains performance comparable to full attention across scaling-law experiments and real-world benchmarks. The architecture is also highly flexible, allowing seamless transitions between sparse MoBA and full attention layers during both training and inference.

Source: https://openreview.net/pdf?id=RlqYCpTu1P
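To make the routing idea concrete, below is a minimal sketch of block-sparse attention in the spirit of MoBA (single head, no batching, causal masking omitted). The block size, the top-k value, and the mean-pooled block representation used for gating are illustrative assumptions, not the paper's exact implementation; see the linked PDF for the full method.

```python
# Minimal sketch of MoBA-style block-sparse attention (assumptions noted above).
import torch
import torch.nn.functional as F

def moba_attention(q, k, v, block_size=4, top_k=2):
    """q, k, v: (seq_len, d) tensors. Each query attends only to the
    top_k key/value blocks chosen by a simple gating score."""
    seq_len, d = k.shape
    num_blocks = seq_len // block_size

    # Split keys/values into contiguous blocks.
    k_blocks = k[: num_blocks * block_size].reshape(num_blocks, block_size, d)
    v_blocks = v[: num_blocks * block_size].reshape(num_blocks, block_size, d)

    # Block representation for gating: mean of the keys in each block (assumption).
    block_repr = k_blocks.mean(dim=1)                     # (num_blocks, d)

    # Gating: score every query against every block, keep the top_k blocks.
    gate_scores = q @ block_repr.T                        # (seq_len, num_blocks)
    top_blocks = gate_scores.topk(top_k, dim=-1).indices  # (seq_len, top_k)

    out = torch.zeros_like(q)
    for i in range(q.shape[0]):
        # Gather only the selected blocks' keys/values for this query.
        sel_k = k_blocks[top_blocks[i]].reshape(-1, d)    # (top_k*block_size, d)
        sel_v = v_blocks[top_blocks[i]].reshape(-1, d)
        attn = F.softmax(q[i] @ sel_k.T / d**0.5, dim=-1) # scaled dot-product
        out[i] = attn @ sel_v
    return out

# Example: 16 tokens, 8-dim head; each query attends to 2 of 4 blocks.
q, k, v = (torch.randn(16, 8) for _ in range(3))
print(moba_attention(q, k, v).shape)  # torch.Size([16, 8])
```

Because each query only touches top_k blocks rather than the whole sequence, the per-query cost is independent of total context length, which is where the sub-quadratic scaling described above comes from.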

