
AI: post transformers

Regression Language Models for Code Metrics

03 Oct 2025

Description

This academic paper, dated September 30, 2025, introduces Regression Language Models (RLMs) as a unified method for code-to-metric regression: predicting numerical outcomes from source code or computation graphs. The approach simplifies traditional pipelines by working directly on text input, whether a high-level language such as Haskell or Python or a low-level ONNX graph representation, to predict metrics such as accuracy, memory consumption, and execution latency. The RLM, initialized from a pretrained T5Gemma encoder, performs competitively against specialized models such as Graph Neural Networks (GNNs) across a range of tasks, including performance prediction for Neural Architecture Search (NAS) and memory estimation for competitive programming. The findings highlight the RLM's versatility and its ability to model multiple objectives concurrently, suggesting a shift toward generic, text-based regression for computational graph analysis.

Source: https://arxiv.org/pdf/2509.26476
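
As a rough illustration of the setup described above, the sketch below wires a pretrained T5-style text encoder to a small regression head that maps raw program text to a scalar metric. It is a minimal sketch under stated assumptions, not the paper's implementation: the checkpoint name ("t5-small", standing in for the T5Gemma encoder), the mean-pooling step, and the single-output head are illustrative choices.

```python
# Minimal sketch of code-to-metric regression with a pretrained text encoder.
# Assumptions (not from the paper): the "t5-small" checkpoint stands in for the
# T5Gemma encoder, and the pooling strategy and single-metric head are
# illustrative, not the authors' design.
import torch
import torch.nn as nn
from transformers import AutoTokenizer, T5EncoderModel


class CodeMetricRegressor(nn.Module):
    def __init__(self, encoder_name: str = "t5-small"):
        super().__init__()
        self.encoder = T5EncoderModel.from_pretrained(encoder_name)
        hidden = self.encoder.config.d_model
        # One scalar output; a multi-objective variant would use out_features > 1.
        self.head = nn.Linear(hidden, 1)

    def forward(self, input_ids, attention_mask):
        out = self.encoder(input_ids=input_ids, attention_mask=attention_mask)
        # Mean-pool token embeddings, ignoring padding positions.
        mask = attention_mask.unsqueeze(-1).float()
        pooled = (out.last_hidden_state * mask).sum(1) / mask.sum(1).clamp(min=1e-6)
        return self.head(pooled).squeeze(-1)


tokenizer = AutoTokenizer.from_pretrained("t5-small")
model = CodeMetricRegressor()

# The input is plain program text; an ONNX graph would be serialized to text the same way.
source = "def f(xs):\n    return sorted(xs)[:10]"
batch = tokenizer(source, return_tensors="pt", truncation=True)
predicted_metric = model(batch["input_ids"], batch["attention_mask"])
print(predicted_metric.item())  # untrained, so the value is meaningless
```

Modeling several metrics concurrently, as the description mentions, would amount to widening the head to one output per target (e.g., accuracy, memory, latency) and training on all objectives jointly.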
