
Austrian Artificial Intelligence Podcast

57. Eldar Kurtic - Efficient Inference through sparsity and quantization - Part 2/2

25 Jun 2024

Description

Hello and welcome back to the AAIP! This is the second part of my interview with Eldar Kurtic about his research on how to optimize inference of deep neural networks. In the first part of the interview, we focused on sparsity and how high unstructured sparsity can be achieved without losing model accuracy on CPUs and, in part, on GPUs.

In this second part, we focus on quantization. Quantization reduces model size by representing the model in numeric formats of lower precision while retaining model performance. For example, a model trained in a standard 32-bit floating-point representation is converted during post-training quantization to a representation that uses only 8 bits, reducing the model size to one fourth.

We discuss how current quantization methods can be applied to quantize model weights down to 4 bits while retaining most of the model's performance, and why doing the same with the model's activations is much trickier. Eldar also explains how current GPU architectures create two different types of bottlenecks: memory-bound and compute-bound scenarios. In memory-bound situations, most of the inference time is spent transferring model weights, and this is exactly where quantization has its biggest impact, because reducing the model size directly accelerates inference.

Enjoy.

## AAIP Community

Join our discord server and ask guests directly or discuss related topics with the community.

https://discord.gg/5Pj446VKNU

### References

Eldar Kurtic: https://www.linkedin.com/in/eldar-kurti%C4%87-77963b160/

Neural Magic: https://neuralmagic.com/

IST Austria Alistarh Group: https://ist.ac.at/en/research/alistarh-group/
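To make the quantization idea from the episode concrete, here is a minimal sketch of symmetric per-tensor post-training quantization to int8 in NumPy. It illustrates the general technique only; it is not the specific pipeline Eldar or Neural Magic use, and the tensor shape and rounding scheme are assumptions for the example.

```python
import numpy as np

def quantize_int8(weights: np.ndarray):
    """Symmetric per-tensor post-training quantization to int8.

    Maps float32 weights onto the integer range [-127, 127] with a single
    scale factor, so the stored tensor is 4x smaller than in float32.
    """
    scale = np.max(np.abs(weights)) / 127.0                     # one scale for the whole tensor
    q = np.clip(np.round(weights / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize(q: np.ndarray, scale: float) -> np.ndarray:
    """Recover an approximation of the original float32 weights."""
    return q.astype(np.float32) * scale

# Toy example: one hypothetical weight matrix of a transformer layer.
w = np.random.randn(4096, 4096).astype(np.float32)
q, scale = quantize_int8(w)

print("float32 size:", w.nbytes / 2**20, "MiB")                 # 64 MiB
print("int8 size:   ", q.nbytes / 2**20, "MiB")                 # 16 MiB
print("max abs error:", np.max(np.abs(w - dequantize(q, scale))))
```

Real post-training quantization schemes typically use per-channel or per-group scales and calibration data to keep the error low, especially at 4 bits; the per-tensor scale above is the simplest possible variant.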
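The memory-bound argument can also be put into rough numbers: during single-stream decoding, the time per token is approximately the bytes of weights streamed from memory divided by the available memory bandwidth. The model size and bandwidth below are hypothetical round numbers, chosen only to show how lower-precision weights translate into faster inference in the memory-bound regime.

```python
# Back-of-the-envelope estimate for the memory-bound case:
# time per token ~= (bytes of weights read) / (memory bandwidth).
# Illustrative assumptions, not measurements from the episode.

params = 7e9            # hypothetical 7B-parameter model
bandwidth = 2.0e12      # assumed ~2 TB/s of GPU memory bandwidth

for name, bytes_per_weight in [("fp16", 2.0), ("int8", 1.0), ("int4", 0.5)]:
    model_bytes = params * bytes_per_weight
    tokens_per_s = bandwidth / model_bytes
    print(f"{name}: {model_bytes / 1e9:.1f} GB of weights -> ~{tokens_per_s:.0f} tokens/s")
```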


