AI Podcast

Efficiency Optimization for Large-Scale Transformer Model Inference

20 Jan 2025

Description

This episode takes a deep dive into how to efficiently deploy large Transformer models for generative inference, particularly in latency-sensitive settings with long sequence lengths. We discuss model parallelism strategies, memory optimizations, and low-level optimization techniques that together achieve a new Pareto frontier in latency and model FLOPS utilization.
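
As a rough illustration of two quantities mentioned above, the short Python sketch below estimates (i) the key/value-cache memory occupied by a long-sequence decode batch and (ii) the model FLOPS utilization (MFU) implied by a given decode rate. All model and hardware numbers are hypothetical placeholders, not figures from the episode.

# Back-of-the-envelope sketch (all numbers hypothetical, not from the episode)
# of two quantities the description mentions: KV-cache memory for long
# sequences, and model FLOPS utilization (MFU) during generative decoding.

def kv_cache_bytes(n_layers, n_kv_heads, head_dim, seq_len, batch, bytes_per_elem=2):
    # The cache stores one K and one V tensor per layer, each of shape
    # [batch, n_kv_heads, seq_len, head_dim], here in 16-bit precision.
    return 2 * n_layers * n_kv_heads * head_dim * seq_len * batch * bytes_per_elem

def mfu(n_params, tokens_per_sec, peak_flops_per_sec):
    # Decoding one token costs roughly 2 * n_params FLOPs (one multiply-add per
    # parameter), so MFU is achieved FLOPs/s divided by the hardware peak.
    return (2.0 * n_params * tokens_per_sec) / peak_flops_per_sec

# Illustrative example: a hypothetical 70B-parameter model with 80 layers,
# 8 KV heads, head_dim 128, batch 32, a 2048-token context, decoding
# 1,000 tokens/s on hardware with a 1e15 FLOPs/s peak.
print(f"KV cache ≈ {kv_cache_bytes(80, 8, 128, 2048, 32) / 2**30:.1f} GiB")  # ≈ 20.0 GiB
print(f"MFU ≈ {mfu(70e9, 1_000, 1e15):.1%}")                                 # ≈ 14.0%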

Featured in this Episode

No persons identified in this episode.

Transcription

This episode hasn't been transcribed yet.

Comments

There are no comments yet.
