
AI: post transformers

AMD: Instella: Fully Open Language Models with Stellar Performance

16 Nov 2025

Description

The November 13, 2025 paper by AMD introduces **Instella**, a new family of **fully open-source** three-billion-parameter large language models (LLMs) trained on AMD Instinct MI300X GPUs. Its central focus is advancing transparency and reproducibility in LLMs by releasing not only the model weights but also the **complete training pipeline, datasets, and optimization details**. Instella achieves **state-of-the-art performance** among fully open models of its size and remains competitive with leading open-weight counterparts despite using fewer pre-training tokens. The family includes specialized variants: **Instella-Long**, which supports a 128K-token context length, and **Instella-Math**, a reasoning-centric model enhanced through specialized supervised fine-tuning and reinforcement learning. The paper details the two-stage pre-training, the post-training, and the specific methods used to create the Long and Math versions, demonstrating that **openness does not compromise performance**. Source: https://arxiv.org/pdf/2511.10628

