AI: post transformers

Scaling Reinforcement Learning Compute for LLMs

17 Oct 2025

Description

This October 15, 2025 collaboration between Meta, UT Austin, UCL, UC Berkeley, Harvard University, and Periodic Labs presents a systematic study of scaling compute for reinforcement learning (RL) in large language models (LLMs), aiming to bring predictability to the RL training phase. The authors introduce a principled framework that models the relationship between compute (GPU-hours) and performance (pass rate) with a sigmoidal curve, enabling prediction of a run's asymptotic performance ($A$) and compute efficiency ($B$). Through extensive ablations, the study identifies ScaleRL, a robust recipe that combines best practices in asynchronous training, loss functions (CISPO), and precision fixes, and demonstrates its superior scalability and stability up to 100,000 GPU-hours. Figures illustrate ScaleRL's predictable scaling curves compared with those of prevalent RL methods, showing how factors such as batch size, generation length, and model size influence both compute efficiency and the final performance ceiling.

Source: https://arxiv.org/pdf/2510.13786
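In this framework, the pass rate as a function of training compute follows a saturating sigmoid whose ceiling is $A$ and whose steepness reflects compute efficiency $B$. The sketch below shows how such a curve could be fit and extrapolated in Python; the exact functional form, the midpoint parameter c_mid, the baseline r0, and all data values here are illustrative assumptions, not the paper's actual equation or results.

```python
import numpy as np
from scipy.optimize import curve_fit

def sigmoid_scaling(compute, A, B, c_mid, r0):
    # Assumed saturating sigmoid: pass rate rises from baseline r0 toward
    # the asymptote A; B sets how sharply the curve climbs (compute
    # efficiency); c_mid is the compute at the halfway point.
    return r0 + (A - r0) / (1.0 + (c_mid / compute) ** B)

# Hypothetical (GPU-hours, pass-rate) checkpoints from an RL run.
gpu_hours = np.array([100.0, 300.0, 1e3, 3e3, 1e4, 3e4])
pass_rate = np.array([0.12, 0.21, 0.34, 0.45, 0.52, 0.56])

# Fit A, B, c_mid, r0 to the observed points.
popt, _ = curve_fit(
    sigmoid_scaling, gpu_hours, pass_rate,
    p0=[0.6, 1.0, 3e3, 0.1],                 # rough initial guesses
    bounds=([0, 0, 0, 0], [1, 10, 1e6, 1]),  # keep rates in [0, 1]
)
A, B, c_mid, r0 = popt
print(f"asymptote A = {A:.3f}, efficiency B = {B:.2f}, "
      f"midpoint = {c_mid:.0f} GPU-hours")

# Extrapolate the fitted curve to a larger compute budget.
print(f"predicted pass rate at 100k GPU-hours: "
      f"{sigmoid_scaling(1e5, *popt):.3f}")
```

Fitting these parameters early in a run is what makes the scaling curves predictive: candidate recipes can be compared by their fitted ceiling $A$ and efficiency $B$ rather than by training each one to completion.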
