
Austrian Artificial Intelligence Podcast

24.1 Hamid Eghbal-zadeh - JKU: Improving out-of-distribution performance with robust and disentangled representations - Part 1/2

08 Apr 2022

Description

This is the first part of my interview with Hamid Eghbal-zadeh, post-doc at the Institute of Machine Learning at Johannes Kepler University. In the interview, we talk about his research on a range of aspects of representation learning with deep neural networks, aimed at making models more robust and improving their out-of-distribution behavior. In this first part, we discuss the origins of representation learning and data augmentation. Hamid explains his research on the effects of representation learning on model training and highlights some important caveats that data augmentation entails for the robustness of your models.

References:

Personal homepage: https://eghbalz.github.io/
Hamid on LinkedIn: https://www.linkedin.com/in/hamid-eghbal-zadeh-8642b3a8/
H. Eghbal-zadeh, Representation Learning and Inference from Signals and Sequences, PhD Thesis, 2019.
H. Eghbal-zadeh, F. Henkel, G. Widmer, Context-Adaptive Reinforcement Learning using Unsupervised Learning of Context Variables, in Proceedings of Machine Learning Research, NeurIPS 2020 Workshop on Pre-registration in Machine Learning, PMLR 148:236-254, 2021.
