AI Coach - Anil Nathoo

98 - Foundations of Large Language Models (Tong Xiao and Jingbo Zhu)

02 Sep 2025

Description

This podcast is based on the paper "Foundations of Large Language Models" by Tong Xiao and Jingbo Zhu. It offers a comprehensive exploration of Large Language Models (LLMs), beginning with pre-training methods in Natural Language Processing, including both supervised and self-supervised approaches such as masked language modeling with models like BERT. It then moves to a detailed discussion of LLMs, covering their architecture, training challenges, and the critical concept of alignment with human preferences through techniques such as Supervised Fine-Tuning (SFT) and Reinforcement Learning from Human Feedback (RLHF). A significant portion of the episode focuses on LLM inference, explaining fundamental algorithms such as prefilling and decoding, along with methods for improving efficiency and scalability, including prompt engineering and advanced search strategies. The episode also touches on important considerations such as bias in training data, privacy concerns, and the emergent abilities and scaling laws that govern LLM performance.
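To make the prefilling/decoding distinction mentioned above concrete, here is a minimal sketch of the two inference phases. It is not from the paper or the episode; the "model" is a toy stand-in (random embedding and output matrices, mean pooling in place of attention), and names like VOCAB, W_embed, prefill, and decode are hypothetical. Only the structure matters: prefill runs every prompt token once to populate a cache, and decode then generates one token at a time while reusing that cache.

```python
# Toy sketch of prefill vs. decode in LLM inference (not a real model).
import numpy as np

VOCAB = 50          # hypothetical vocabulary size
D = 16              # hypothetical hidden size
rng = np.random.default_rng(0)
W_embed = rng.normal(size=(VOCAB, D))   # toy embedding matrix
W_out = rng.normal(size=(D, VOCAB))     # toy output projection

def step(token_id, kv_cache):
    """Process one token: extend the cache, pool over it, return logits."""
    h = W_embed[token_id]                 # embed the current token
    kv_cache.append(h)                    # cache its state for later steps
    context = np.mean(kv_cache, axis=0)   # stand-in for attention over the cache
    return context @ W_out                # logits over the vocabulary

def prefill(prompt_ids):
    """Prefill phase: run all prompt tokens once to build the cache."""
    kv_cache, logits = [], None
    for tok in prompt_ids:
        logits = step(tok, kv_cache)
    return kv_cache, logits

def decode(kv_cache, logits, max_new_tokens=5):
    """Decode phase: generate tokens one at a time, reusing the cache."""
    out = []
    for _ in range(max_new_tokens):
        next_tok = int(np.argmax(logits))  # greedy choice for simplicity
        out.append(next_tok)
        logits = step(next_tok, kv_cache)
    return out

prompt = [3, 17, 42]                       # hypothetical token ids
cache, last_logits = prefill(prompt)
print(decode(cache, last_logits))
```

In a real LLM the cache holds per-layer key/value tensors and the pooling is multi-head attention, but the split is the same: prefilling is a single parallel pass over the prompt, while decoding is the sequential, cache-reusing loop whose cost the efficiency techniques discussed in the episode aim to reduce.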


