
The TWIML AI Podcast (formerly This Week in Machine Learning & Artificial Intelligence)

Neural Network Quantization and Compression with Tijmen Blankevoort - TWIML Talk #292

19 Aug 2019

Description

Today we’re joined by Tijmen Blankevoort, a staff engineer at Qualcomm, where he leads the compression and quantization research teams. In our conversation with Tijmen we discuss:
• The ins and outs of compression and quantization of ML models, specifically neural networks
• How much models can actually be compressed, and the best ways to achieve compression
• A few recent papers, including “The Lottery Ticket Hypothesis”


Transcription

This episode hasn't been transcribed yet.


Comments

There are no comments yet.
