Andrew Ilyas

Speaker
638 total appearances

Podcast Appearances

Machine Learning Street Talk (MLST)
Adversarial Examples and Data Modelling - Andrew Ilyas (MIT)

If we forget for a second about the data collection process and we just assume that you have a data set, clearly changing that data set is going to change model predictions in some way.

And so what we were asking is: without actually thinking about the very mechanistic details of the learning algorithm itself, can we black-box that away and think of machine learning as just a map directly from training data set to prediction?
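This "training set to prediction" view can be sketched with a toy example. Everything below is illustrative, not the authors' actual code: the stand-in learner is deliberately trivial so we can treat it as a black box from a subset of training examples to a prediction, and then fit a simple linear surrogate of that map over random subsets.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy training set: 20 examples with scalar labels (illustrative only).
n = 20
labels = rng.normal(size=n)

def black_box_prediction(mask):
    # Hypothetical stand-in learner: its prediction on a fixed test input
    # is just the mean label of whichever training examples were included.
    return labels[mask].mean()

# Sample many random subsets and record the learner's output on each.
m = 500
masks = rng.random((m, n)) < 0.5
outputs = np.array([black_box_prediction(s) for s in masks])

# Fit a linear surrogate of the map: output ~ theta . mask + bias.
A = np.hstack([masks.astype(float), np.ones((m, 1))])
theta, *_ = np.linalg.lstsq(A, outputs, rcond=None)

# theta[i] now estimates how much including training example i
# moves the prediction, without opening up the learner at all.
```

The point of the sketch is only the shape of the question: once you can sample (subset, prediction) pairs, you can study how the prediction depends on the data without modeling the learning algorithm's internals.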

It's an honor to be here.

Yeah, so my name is Andrew.

I'm a sixth-year PhD student at MIT.

I'm advised by Aleksander Madry and Costis Daskalakis, hopefully graduating soon.

I work a lot on robustness and reliability, with a focus on the entire machine learning pipeline: how we collect data, how we turn it into data sets, and what learning algorithms we use. It's really about taking a step back and looking at the pipeline as a whole to answer questions about robustness and reliability.

Yeah, absolutely.

I think a big goal of my work is what I'd call predictability in machine learning systems.

So really, can we understand the principles behind why these systems work well enough that, when we put them into production, we know both when they're going to work and when they're not, and ideally why.

Yeah, so I started at MIT in 2015.

Towards the end of my undergrad, I got really interested in this phenomenon of adversarial examples, just doing undergrad research with a couple of my friends.

We worked on a couple of papers together and just got really excited about the field.

And into my PhD work I continued that interest for a while, working both on developing attacks and on trying to build an understanding of why these things even arise. I can explain the whole path, but gradually that brought me along to this conclusion that we really need to understand the interaction between

training data and models and basically trying to get at some core of why machine learning works the way it does.

Yeah, so an adversarial example is just a very small perturbation to a natural input, made so that a machine learning model doing inference on that input misbehaves.

And so in the context of images, that could be changing a couple of pixels so that a classifier misclassifies the image.

There's been recent work on this in the context of language, where now you're trying to append a very small suffix or prefix to your prompt so that the language model does some unintended behavior.

But broadly speaking, it's about slightly changing inputs to make machine learning models misbehave.
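For images, such perturbations are often found with gradient-based attacks. Here is a minimal fast-gradient-sign-style sketch on a toy logistic classifier, with the gradient computed by hand; all the weights and inputs are made-up numbers for illustration, not from any real model:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Hypothetical linear classifier: predict class 1 if sigmoid(w @ x) > 0.5.
w = np.array([1.0, -2.0, 0.5])
x = np.array([0.3, -0.2, 0.4])   # a "natural" input with true label y = 1
y = 1.0

# Gradient of the logistic loss with respect to the *input* x.
grad_x = (sigmoid(w @ x) - y) * w

# FGSM-style perturbation: a small step in the direction that increases
# the loss, bounded coordinate-wise by eps.
eps = 0.3
x_adv = x + eps * np.sign(grad_x)

print(sigmoid(w @ x) > 0.5)      # True: original input classified correctly
print(sigmoid(w @ x_adv) > 0.5)  # False: small perturbation flips the prediction
```

Each coordinate moves by at most `eps`, yet the prediction flips, which is exactly the "slightly changing inputs to make models misbehave" phenomenon described above.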

Yeah, so I think about it in at least four or five different steps.

โ† Previous Page 1 of 32 Next โ†’