
AI: post transformers

Eleuther: evaluating LLMs

07 Sep 2025

Description

These sources collectively explore approaches to evaluating and improving large language models (LLMs). Several of the papers introduce benchmark datasets designed to test LLMs on complex reasoning tasks: the "BIG-Bench Hard (BBH)" suite, the competition-level "MATH" dataset, the graduate-level "Google-proof" science questions of "GPQA", and "MuSR" for multistep soft reasoning over natural-language narratives. A key technique discussed across these sources is Chain-of-Thought (CoT) prompting, which encourages LLMs to lay out their step-by-step reasoning and improves performance, on some challenging tasks enough to surpass average human-rater scores. The "Instruction-Following Eval (IFEval)" adds a reproducible benchmark of verifiable instructions (e.g. "write in more than 400 words"), allowing objective assessment of an LLM's ability to follow explicit directives. "MMLU-Pro" contributes a larger, more robust multi-task dataset spanning diverse disciplines, underscoring the need for challenging data and rigorous evaluation metrics to push the boundaries of AI reasoning.

Sources:
https://github.com/EleutherAI/lm-evaluation-harness
https://github.com/EleutherAI/lm-evaluation-harness/blob/main/lm_eval/tasks/leaderboard/README.md
https://arxiv.org/pdf/2103.03874 - Measuring Mathematical Problem Solving With the MATH Dataset
https://arxiv.org/pdf/2210.09261 - Challenging BIG-Bench Tasks and Whether Chain-of-Thought Can Solve Them
https://arxiv.org/pdf/2310.16049 - MuSR: Testing the Limits of Chain-of-Thought with Multistep Soft Reasoning
https://arxiv.org/pdf/2311.07911 - Instruction-Following Evaluation for Large Language Models
https://arxiv.org/pdf/2311.12022 - GPQA: A Graduate-Level Google-Proof Q&A Benchmark
https://arxiv.org/pdf/2406.01574 - MMLU-Pro: A More Robust and Challenging Multi-Task Language Understanding Benchmark
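The first two sources point to EleutherAI's lm-evaluation-harness, which packages these benchmarks as ready-made tasks (the leaderboard/README.md linked above documents its leaderboard_* task group). As a minimal sketch of running the suite through the harness's Python API, assuming a recent (v0.4+) lm-eval release: the checkpoint below is just a placeholder, and the exact task names may vary between harness versions.

# A minimal sketch, assuming `pip install lm-eval` (v0.4+).
# The checkpoint is a placeholder; swap in any Hugging Face model.
import lm_eval

results = lm_eval.simple_evaluate(
    model="hf",  # Hugging Face transformers backend
    model_args="pretrained=EleutherAI/pythia-1.4b",  # placeholder model
    tasks=[
        "leaderboard_bbh",        # BIG-Bench Hard
        "leaderboard_gpqa",       # graduate-level science QA
        "leaderboard_ifeval",     # verifiable instruction following
        "leaderboard_musr",       # multistep soft reasoning
        "leaderboard_mmlu_pro",   # MMLU-Pro
        "leaderboard_math_hard",  # MATH (hard subset)
    ],
    batch_size="auto",
)

# Per-task metrics (accuracy, exact match, etc.) keyed by task name.
print(results["results"])

The same run maps onto the command-line entry point, e.g. lm_eval --model hf --model_args pretrained=... --tasks leaderboard_bbh,leaderboard_gpqa --batch_size auto.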
