
AI: post transformers

Llama 3: Architecture, Capabilities, and Safety

14 Sep 2025

Description

The Meta Llama Team's paper introduces Llama 3, a new family of large language models with 8B, 70B, and 405B parameters, designed with native support for multilinguality, coding, reasoning, and tool use. Development emphasizes data quality and diversity, employing extensive filtering, de-duplication, and heuristic cleaning for both English and multilingual data, alongside scaling laws to optimize model size and training budgets. The models use a standard dense Transformer architecture with minor adaptations such as grouped query attention and an attention mask that prevents self-attention across documents in packed multi-document sequences, and they perform comparably to leading models such as GPT-4 across a range of benchmarks. The research also explores integrating multimodal capabilities (image, video, and speech) through a compositional approach, using specialized encoders and adapters trained via multi-stage pre-training and fine-tuning. A significant focus is placed on safety and responsible development: comprehensive data cleaning, iterative safety finetuning with reward models and DPO, and extensive red teaming to address risks such as insecure code generation and prompt injection, along with the public release of Llama Guard 3 as a system-level safety classifier.

Source: https://arxiv.org/pdf/2407.21783
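The grouped query attention mentioned above reduces key/value cache size by letting several query heads share one key/value head. A minimal numpy sketch (shapes, head counts, and the function name are illustrative, not from the paper):

```python
import numpy as np

def grouped_query_attention(q, k, v):
    """Grouped query attention: many query heads share fewer KV heads.

    q: (n_q_heads, seq, d); k, v: (n_kv_heads, seq, d), with
    n_q_heads divisible by n_kv_heads. Illustrative sketch only.
    """
    n_q_heads, seq, d = q.shape
    n_kv_heads = k.shape[0]
    group = n_q_heads // n_kv_heads          # query heads per KV head
    # Broadcast each KV head across its group of query heads
    k = np.repeat(k, group, axis=0)          # (n_q_heads, seq, d)
    v = np.repeat(v, group, axis=0)
    scores = q @ k.transpose(0, 2, 1) / np.sqrt(d)
    # Numerically stable softmax over the key dimension
    w = np.exp(scores - scores.max(axis=-1, keepdims=True))
    w = w / w.sum(axis=-1, keepdims=True)
    return w @ v                             # (n_q_heads, seq, d)

rng = np.random.default_rng(0)
q = rng.standard_normal((8, 4, 16))  # 8 query heads
k = rng.standard_normal((2, 4, 16))  # 2 KV heads, shared 4-to-1
v = rng.standard_normal((2, 4, 16))
out = grouped_query_attention(q, k, v)
print(out.shape)  # (8, 4, 16)
```

With 8 query heads and 2 KV heads, the KV cache is a quarter of the multi-head-attention size while the query side is unchanged, which is the memory/quality trade-off the technique targets.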


