
Andrew Ilyas

Speaker
638 total appearances

Podcast Appearances

Machine Learning Street Talk (MLST)
Adversarial Examples and Data Modelling - Andrew Ilyas (MIT)

But I think the core challenge there is that we have a very poor handle on features in natural images. We don't really have a way of... I mean, if you think about what's the space of features for an image, it's basically infinite. And we don't really have a great way of removing a non-robust feature or adding a non-robust feature, or anything like that.

Fine-tuning or robustifying networks post hoc is really interesting and has been a big area of study. I think there has been some work trying to do this via randomized smoothing at test time, for example, or by fine-tuning the network using a robust objective, something like that. But I would say we've had less progress on that than on algorithm-focused approaches.
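To make the test-time idea concrete, here is a minimal sketch of randomized smoothing (in the spirit of Cohen et al., 2019), assuming a trained PyTorch classifier; the function name, noise level, and sample count are illustrative choices, not anything from the conversation.

```python
import torch

def smoothed_predict(model, x, sigma=0.25, n_samples=100):
    """Classify a single image x of shape (C, H, W) by majority vote over
    Gaussian-perturbed copies, which smooths the decision function."""
    with torch.no_grad():
        noisy = x.unsqueeze(0) + sigma * torch.randn(n_samples, *x.shape)
        votes = model(noisy).argmax(dim=1)    # predicted class per noisy copy
    return torch.mode(votes).values.item()    # most common prediction wins
```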

Yeah, it's a fascinating question. I think there are a lot of people studying this inductive or implicit bias as well, which deals with a very similar thing. And I think all of these are getting at this core problem: the space of features that neural networks can learn is unfathomably large, because of how over-parametrized they are and the fact that they can, for example, memorize the entire training set with random labels.
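As a toy illustration of that memorization point, here is a sketch assuming a standard torchvision CIFAR-10 setup; any training accuracy above 10% on these labels can only come from memorizing individual images.

```python
import torch
import torchvision

# Standard CIFAR-10 training set...
train_set = torchvision.datasets.CIFAR10(
    root="./data", train=True, download=True,
    transform=torchvision.transforms.ToTensor())

# ...but with every label replaced by a uniformly random class, so the labels
# carry no signal. An over-parametrized network trained with plain SGD can
# still drive the training loss on this data to near zero.
train_set.targets = torch.randint(0, 10, (len(train_set),)).tolist()
```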

And so they have almost infinite features to choose from. So there's this interesting question: okay, if we just leave things alone and run SGD with this architecture, what features will the network actually converge to? I think that's almost a more relevant question than what features they can represent, because the answer to the latter question is almost always "any feature".

And I think the work you were talking about, the masked autoencoders, you can view as asking: what knobs do we have to change a neural network's inductive biases? Similarly, the paper by Robert Geirhos, the texture-versus-shape bias one, was trying to figure out what knobs we have to play with to change this texture-versus-shape bias, whether that's the addition of training data that has different styles with the same class.
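A minimal sketch of that data-side knob, assuming ImageFolder-style directories for natural and stylized copies of the training set; the directory names here are hypothetical placeholders.

```python
import torchvision
from torch.utils.data import ConcatDataset

to_tensor = torchvision.transforms.ToTensor()

# Natural images plus a stylized copy in which each image keeps its class but
# has its texture replaced: texture stops predicting the label, shape still does.
natural = torchvision.datasets.ImageFolder("imagenet/train", transform=to_tensor)
stylized = torchvision.datasets.ImageFolder("stylized-imagenet/train", transform=to_tensor)
combined = ConcatDataset([natural, stylized])
```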

And I think a really natural perspective from which to view adversarial training, coming from robust optimization, is exactly saying: okay, the inductive bias or the implicit bias of our neural networks is leading us towards features that are great, they just happen to not be adversarially robust. And so you can view adversarial training as basically trying to change those inductive biases so that they lead us towards features that are robust.
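To spell out that robust-optimization view, here is a minimal sketch of adversarial training with a projected-gradient-descent (PGD) inner step, assuming a PyTorch classifier, a data loader of (image, label) batches with pixels in [0, 1], and an l-infinity budget; all names and hyperparameters are illustrative.

```python
import torch
import torch.nn.functional as F

def pgd_perturbation(model, x, y, eps=8/255, alpha=2/255, steps=10):
    """Inner maximization: find a perturbation inside the eps-ball that
    (approximately) maximizes the classification loss."""
    delta = torch.zeros_like(x, requires_grad=True)
    for _ in range(steps):
        loss = F.cross_entropy(model(x + delta), y)
        loss.backward()
        with torch.no_grad():
            delta += alpha * delta.grad.sign()           # ascend the loss
            delta.clamp_(-eps, eps)                      # project to the eps-ball
            delta.data = (x + delta).clamp(0, 1) - x     # keep pixels valid
        delta.grad.zero_()
    return delta.detach()

def adversarial_training_epoch(model, loader, optimizer):
    """Outer minimization: update the weights on worst-case perturbed inputs,
    nudging the learned features towards robust ones."""
    for x, y in loader:
        delta = pgd_perturbation(model, x, y)
        optimizer.zero_grad()                            # clears attack-time grads too
        loss = F.cross_entropy(model(x + delta), y)
        loss.backward()
        optimizer.step()
```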