AI: post transformers

LoFT: Parameter-Efficient Fine-Tuning for Long-tailed Semi-Supervised Learning

17 Sep 2025

Description

This September 2025 paper introduces LoFT, a framework that improves Long-Tailed Semi-Supervised Learning (LTSSL) through parameter-efficient fine-tuning (PEFT) of pre-trained foundation models. The core idea is that fine-tuning improves confidence calibration and thereby yields more reliable pseudo-labels, which is crucial for coping with the class imbalance inherent in long-tailed datasets. The paper further extends the approach to open-world scenarios with LoFT-OW, which adds mechanisms to detect and filter out-of-distribution (OOD) samples from the unlabeled data. The authors show that the fine-tuned models outperform previous methods on multiple benchmarks while using significantly less unlabeled data.

Source: https://arxiv.org/pdf/2509.09926
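
The description names the ingredients (PEFT of a foundation model, calibrated pseudo-labels, OOD filtering) but not the details, so below is a minimal sketch of how they might fit together. The specifics are assumptions, not taken from the paper: a LoRA-style adapter as the PEFT method, a fixed softmax-confidence threshold for pseudo-label selection, and an energy score as the OOD filter.

    import torch
    import torch.nn as nn
    import torch.nn.functional as F


    class LoRALinear(nn.Module):
        """LoRA-style low-rank adapter over a frozen linear layer.

        One common PEFT choice; the description does not say which
        PEFT method LoFT actually uses."""

        def __init__(self, base: nn.Linear, rank: int = 8, alpha: float = 16.0):
            super().__init__()
            self.base = base
            for p in self.base.parameters():
                p.requires_grad = False  # keep the pre-trained weights frozen
            # Only these two small matrices are trained.
            self.A = nn.Parameter(torch.randn(rank, base.in_features) * 0.01)
            self.B = nn.Parameter(torch.zeros(base.out_features, rank))
            self.scale = alpha / rank

        def forward(self, x: torch.Tensor) -> torch.Tensor:
            return self.base(x) + self.scale * F.linear(F.linear(x, self.A), self.B)


    @torch.no_grad()
    def select_pseudo_labels(
        logits: torch.Tensor,
        conf_threshold: float = 0.95,
        energy_threshold: float = -2.0,
    ):
        """Keep only confident, in-distribution pseudo-labels.

        The energy score (negative log-sum-exp of the logits) is a standard
        OOD heuristic used here as a stand-in for whatever filtering LoFT-OW
        performs; both thresholds are illustrative placeholders."""
        probs = logits.softmax(dim=-1)
        confidence, labels = probs.max(dim=-1)
        energy = -torch.logsumexp(logits, dim=-1)  # lower = more in-distribution
        keep = (confidence >= conf_threshold) & (energy <= energy_threshold)
        return labels, keep


    if __name__ == "__main__":
        head = nn.Linear(512, 100)       # stands in for a pre-trained classifier head
        model = LoRALinear(head)         # only the adapter matrices A and B train
        features = torch.randn(32, 512)  # a batch of unlabeled-sample features
        labels, keep = select_pseudo_labels(model(features))
        print(f"kept {keep.sum().item()} / {keep.numel()} pseudo-labels")

In practice, one would train the adapter on the labeled long-tailed data plus the retained pseudo-labels, re-generating the pseudo-labels as calibration improves; the thresholds above are placeholders, not values from the paper.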


