
AI: post transformers

Federated Post-Training LLMs: An Accessibility and Efficiency Survey

16 Sep 2025

Description

This August 2025 paper surveys the evolving landscape of Federated Large Language Models (FedLLM), focusing on how large language models are post-trained while preserving user data privacy. The authors introduce a novel taxonomy that categorizes FedLLM approaches along two axes: model accessibility (white-box, gray-box, and black-box) and parameter efficiency. The survey highlights techniques within these categories, such as adapter-based tuning and prompt tuning, which reduce computational and communication overhead. It also discusses the growing importance of inference-only black-box settings for future FedLLM development and identifies open challenges, including federated value alignment and enhanced security in constrained environments.

Source: https://arxiv.org/html/2508.16261v1
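To make the parameter-efficiency point concrete, below is a minimal Python sketch of adapter-only federated averaging, the kind of white-box technique the description mentions. It is not the paper's implementation; the names and shapes (lora_A, lora_B, a 4096-wide layer with rank 8) are illustrative assumptions chosen to show why exchanging only adapter weights shrinks communication from billions of parameters to tens of thousands.

```python
# Sketch of adapter-only federated averaging (FedAvg over LoRA weights).
# All names and sizes are hypothetical, not taken from the surveyed paper.
import numpy as np

HIDDEN, RANK = 4096, 8  # base layer width and LoRA rank (assumed values)

def init_adapter(rng):
    """Each client trains only the low-rank adapter; the base LLM stays frozen."""
    return {
        "lora_A": rng.normal(0.0, 0.02, size=(HIDDEN, RANK)),
        "lora_B": np.zeros((RANK, HIDDEN)),
    }

def local_update(adapter, rng):
    """Stand-in for local training: apply one step of pseudo-gradients."""
    return {name: w - 0.01 * rng.normal(size=w.shape) for name, w in adapter.items()}

def fedavg(adapters):
    """Server averages only the adapter tensors -- kilobytes-to-megabytes
    per round instead of shipping the full model."""
    return {
        name: np.mean([a[name] for a in adapters], axis=0)
        for name in adapters[0]
    }

rng = np.random.default_rng(0)
global_adapter = init_adapter(rng)
for round_idx in range(3):  # three federated rounds with four clients each
    client_adapters = [local_update(global_adapter, rng) for _ in range(4)]
    global_adapter = fedavg(client_adapters)

adapter_params = sum(w.size for w in global_adapter.values())
print(f"Parameters exchanged per round: {adapter_params:,}")  # 65,536 vs. billions
```

The same averaging loop applies to prompt tuning in the gray- and black-box settings the taxonomy covers: clients would exchange a small matrix of soft-prompt embeddings instead of LoRA factors, leaving the served model untouched.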
