
AI: post transformers

LongCodeZip: Compress Long Code Context for LLMs

08 Oct 2025

Description

The October 2025 paper introduces **LongCodeZip**, a training-free, model-agnostic framework for **compressing long code contexts** to improve the efficiency and capability of code Large Language Models (LLMs). The core problem it addresses: long code contexts drive up API costs and latency, and the structured nature of code makes it hard for models to locate the relevant information.

LongCodeZip uses a two-stage hierarchical approach. **Coarse-grained compression** selects the most relevant functions by conditional perplexity (an approximation of mutual information); **fine-grained compression** then prunes the code inside those functions into semantically coherent blocks using perplexity-based chunking and a knapsack optimization that maximizes information density under a token budget. Across code completion, summarization, and question-answering tasks, LongCodeZip achieves up to a **5.6x compression ratio** while consistently outperforming existing compression and retrieval-augmented generation (RAG) baselines, even when using a smaller compression model.

Source: https://arxiv.org/pdf/2510.00446
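The knapsack step in the fine-grained stage can be illustrated with a standard 0/1 knapsack dynamic program: each candidate code block carries a relevance score and a token count, and the goal is to maximize total relevance without exceeding a token budget. This is a minimal sketch of that idea, not the paper's implementation — the scoring values, block sizes, and function name `select_blocks` are all illustrative assumptions.

```python
def select_blocks(blocks, budget):
    """0/1 knapsack over (relevance, tokens) pairs via dynamic programming.

    blocks: list of (relevance: float, tokens: int) — illustrative scores,
            not the paper's actual perplexity-derived values.
    budget: maximum total tokens allowed in the compressed context.
    Returns the indices of the chosen blocks.
    """
    # best[b] = (total relevance, chosen indices) using at most b tokens
    best = [(0.0, [])] * (budget + 1)
    for i, (rel, tok) in enumerate(blocks):
        # Iterate budgets downward so each block is used at most once
        for b in range(budget, tok - 1, -1):
            cand = best[b - tok][0] + rel
            if cand > best[b][0]:
                best[b] = (cand, best[b - tok][1] + [i])
    return best[budget][1]

# Toy example: three blocks under a 100-token budget. Two moderately
# relevant blocks together beat the single most relevant one.
blocks = [(0.9, 70), (0.7, 50), (0.4, 40)]
print(select_blocks(blocks, 100))  # -> [1, 2]
```

Exact dynamic programming is feasible here because token budgets are small integers; a real system might instead use a greedy density heuristic (relevance per token) when the number of blocks is large.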
