
Andrew Ilyas

Speaker
638 total appearances

Podcast Appearances

Machine Learning Street Talk (MLST)
Adversarial Examples and Data Modelling - Andrew Ilyas (MIT)

sort of the OpenAI scale.

But I don't think there's any lack of interesting questions that can be studied at our scale.

I think there are tons of them.

Absolutely.

OK, so taking a step back, the context here is that we're looking at adversarial examples in the vision setting, which is this phenomenon where you can add a very small perturbation to a natural image.

And by adding that small perturbation, a machine learning model that normally does very, very well will consistently misbehave whenever it sees these perturbed images.
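
A minimal sketch of how such a perturbation can be constructed, assuming a PyTorch image classifier that takes a batch of images with pixels in [0, 1] and returns logits. This uses the fast gradient sign method (FGSM), shown here only as one standard way to produce the small perturbations being described, not as the exact procedure used in the paper.

```python
import torch
import torch.nn.functional as F

def fgsm_perturb(model, images, labels, epsilon=8 / 255):
    """Return adversarially perturbed copies of a batch of [0, 1] images.

    One-step fast gradient sign method (FGSM): take the gradient of the loss
    with respect to the input pixels and step in its sign direction, so every
    pixel changes by at most epsilon and the perturbation stays imperceptible.
    """
    images = images.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(images), labels)
    loss.backward()
    # Nudge each pixel by at most epsilon in the direction that increases the loss.
    perturbed = images + epsilon * images.grad.sign()
    return perturbed.clamp(0.0, 1.0).detach()
```

Despite the per-pixel change being tiny, a standard classifier's prediction on the perturbed batch typically flips, which is the misbehavior described above.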

And so we were trying to figure out why.

And I would say that the paper sort of centered around one experiment and one result that we found particularly surprising.

So just to set the stage a little bit, I think at the time when we were writing this paper,

the sort of conceptual model that people had of adversarial examples is that they were somehow what we refer to as bugs.

But you can think of these in a variety of ways.

Essentially, when you make these adversarial examples, there are a bunch of words for it, like leaving the image manifold, but you're really adding something useless to the image.

There is this intuition that when you train a neural network, it learns a bunch of useful features, and then it also is sensitive to a bunch of useless features.

And that could be because of overfitting, or it could be because of finite-sample error.

It doesn't matter what.

And then the intuition went, OK, well, now that you've learned a bunch of useful features and a bunch of useless features, an adversary can come in at test time.

They can change all of these useless features.

They won't have changed anything perceptible about the input, which is why it looks the same to us.

But the machine learning model that depended on these useless features is now going to be completely misled.

So that's the conceptual model going into this work.
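
To make that "useful vs. useless features" intuition concrete, here is a toy numerical sketch; the linear model, weights, and numbers are illustrative assumptions, not anything from the paper. One genuinely useful feature carries the label, while many coordinates get small spurious weights (as might happen from overfitting or finite-sample noise); an adversary who nudges only the spurious coordinates by an imperceptible amount can flip the prediction without touching the useful feature.

```python
import numpy as np

rng = np.random.default_rng(0)

d = 1000                                   # number of "useless" coordinates
w_useful = 1.0                             # weight on the one genuinely predictive feature
w_useless = rng.normal(0.0, 0.05, size=d)  # small spurious weights (e.g. finite-sample noise)

def score(x_useful, x_useless):
    # Linear "model": positive score means class A, negative score means class B.
    return w_useful * x_useful + w_useless @ x_useless

# Clean input: the useful feature clearly says class A, useless coordinates are zero.
x_useful, x_useless = 1.0, np.zeros(d)
print(score(x_useful, x_useless))              # 1.0 -> correctly classified

# Adversary nudges every useless coordinate by a tiny amount against its weight.
eps = 0.05
delta = -eps * np.sign(w_useless)
print(np.abs(delta).max())                     # 0.05 -> each individual change is tiny
print(score(x_useful, x_useless + delta))      # roughly -1 -> many tiny pushes flip the prediction
```

The point of the sketch is that no single spurious coordinate matters, but because there are many of them, coordinated imperceptible changes to the "useless" directions are enough to mislead a model that puts any weight on them.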