Andrew Ilyas

👤 Speaker
638 total appearances

Podcast Appearances

Machine Learning Street Talk (MLST)
Adversarial Examples and Data Modelling - Andrew Ilyas (MIT)

generalized out of domain and things like that.

And I'm sure there are other notions of robustness as well.

And the interesting thing is each of these robustness problems is on its own like a huge challenge.

And what we actually want is like the union of all of these different notions of robustness.

So I think we're a bit away from that, but a really interesting problem.

Yeah, so in the paper, we studied a very common algorithm for trying to deal with adversarial examples, which is this adversarial training or robust optimization algorithm.

And the idea behind that algorithm is that as you're training your model at each stage, rather than training on clean inputs, you train on adversarial examples for the given model.

And so what that does is basically turn the original loss-minimization problem that you're solving when you train a neural network into this sort of robust-optimization, min-max style problem, where you're now finding parameters that minimize the loss on the worst-case image rather than the loss on the average image.
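The min-max loop described here can be sketched in a few lines. The following is a minimal, illustrative version (my own toy setup, not the paper's implementation), using a logistic-regression model and an FGSM-style one-step inner maximizer in place of a full inner optimization:

```python
# Minimal sketch of adversarial training (robust optimization).
# Assumptions: a logistic-regression "network" and a one-step FGSM-style
# inner maximizer standing in for the full inner max over perturbations.
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fgsm(w, x, y, eps):
    """Inner max: perturb each input within an L-infinity ball of radius
    eps in the direction that increases the logistic loss."""
    # d(loss)/dx for logistic loss is (sigmoid(w.x) - y) * w
    grad_x = (sigmoid(x @ w) - y)[:, None] * w[None, :]
    return x + eps * np.sign(grad_x)

def adversarial_train(x, y, eps=0.1, lr=0.5, steps=200):
    """Outer min: at each step, train on adversarial examples for the
    current model instead of on the clean inputs."""
    rng = np.random.default_rng(0)
    w = rng.normal(scale=0.01, size=x.shape[1])
    for _ in range(steps):
        x_adv = fgsm(w, x, y, eps)   # worst-case inputs for the current w
        grad_w = x_adv.T @ (sigmoid(x_adv @ w) - y) / len(y)
        w -= lr * grad_w             # descend on the worst-case loss
    return w
```

Here `eps` plays the role of the adversary's perturbation budget; training against `fgsm`-perturbed inputs rather than clean ones is exactly the swap from average-case to worst-case loss described above.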

And under this robust versus non-robust features view, I think you can sort of view this robust optimization or adversarial training algorithm as basically beating the non-robust features out of the neural network.

Because if at any point the neural network relies on a non-robust feature, that non-robust feature can be exploited by the adversary and, you know, forced to point the other way.

And the network will sort of learn: OK, I can't rely on that feature, I can't rely on this feature; I have to rely on these more robust features.
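As a toy numeric illustration of why the adversary can force a non-robust feature "to point the other way" (my own example for exposition, not the paper's setup), think of each feature as casting a vote for the class matching its sign:

```python
# Toy illustration (an assumption for exposition, not the paper's setup):
# a feature "votes" for the class matching its sign. A non-robust feature
# is one whose vote an eps-bounded adversary can flip; a robust feature's
# vote survives the same budget.
import numpy as np

def vote_flipped(feature_value, label_sign, eps):
    """True if shifting the feature by eps toward the opposite class
    flips the sign of its vote for the label."""
    perturbed = feature_value - eps * label_sign
    return bool(np.sign(perturbed) != label_sign)

# A positive-class example with one large-margin and one tiny-margin feature.
robust_feature = 1.0       # margin well above the attack budget
non_robust_feature = 0.05  # predictive on average, but margin < eps
eps = 0.1                  # adversary's per-feature budget
```

With these numbers, `vote_flipped(non_robust_feature, 1, eps)` is true while `vote_flipped(robust_feature, 1, eps)` is false, which is exactly why adversarial training pressures the model off the tiny-margin feature.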

Absolutely.

I think the other two options are really interesting.

I would say that there's been significantly less progress along those other two options.

The first option I think is really compelling.

And we actually showed that in this sort of "adversarial examples are not bugs, they're features" paper: we showed a way of pre-processing data, using an existing robust network, that gets you a very small amount of robustness.