
Andrew Ilyas

👤 Speaker
638 total appearances

Appearances Over Time

Podcast Appearances

Machine Learning Street Talk (MLST)
Adversarial Examples and Data Modelling - Andrew Ilyas (MIT)

So can you try selecting the worst examples?

Can you try selecting the best examples?

All of these are very highly non-random datasets.

And what we found is that across the board, these things worked pretty well.

And I would say follow-up work that we've done shows that it works not just for CIFAR-10, but across a bunch of other datasets, although how well it works has varied a little bit from dataset to dataset.

As far as we can tell, it seems to work on more complex models as well.

We've tried scaling up the models pretty intensely.

And actually, there's this folklore intuition we have, which is completely unverified.

So take this with a grain of salt.

But as deep neural networks grow in width and in overparameterization, this linear approximation in their parameter space tends to work better and better.

They sort of look more linear in their parameter space, and there's an interesting connection between linearity in parameter space and linearity in data space.

We sort of know that this linear approximation in data space works really well for very over-parameterized linear models.

And so, again, folklore intuition is that as your deep neural network approaches a very over-parameterized linear model, this linear approximation actually works even better.
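The "linear in parameter space" idea mentioned here can be illustrated with a first-order Taylor expansion: for a parameter vector w near w0, f(x; w) ≈ f(x; w0) + ∇_w f(x; w0) · (w − w0). The toy network and dimensions below are illustrative assumptions, not from the episode:

```python
import numpy as np

rng = np.random.default_rng(1)

# Tiny one-hidden-layer network f(x; w), parameters flattened into w (12 total).
def f(x, w):
    W1 = w[:8].reshape(4, 2)   # hidden layer, width 4
    W2 = w[8:12]               # output layer
    return W2 @ np.tanh(W1 @ x)

x = rng.normal(size=2)
w0 = rng.normal(size=12)

# Numerical gradient of f with respect to the parameters at w0 (central differences).
eps = 1e-6
grad = np.array([
    (f(x, w0 + eps * np.eye(12)[i]) - f(x, w0 - eps * np.eye(12)[i])) / (2 * eps)
    for i in range(12)
])

# First-order Taylor (linear) approximation in parameter space:
# f(x; w0 + dw) ≈ f(x; w0) + grad · dw, accurate for small dw.
dw = 0.01 * rng.normal(size=12)
exact = f(x, w0 + dw)
linear = f(x, w0) + grad @ dw
print(abs(exact - linear))  # second-order error, small for small dw
```

The error of the linear approximation shrinks quadratically with the perturbation size, which is the sense in which very wide, overparameterized networks, whose parameters move little during training, "look linear" in parameter space.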

Yeah, it's a great question that we thought a lot about. In an ideal world, you could come up with a really smart adaptive sampling algorithm for this problem, because you know where you need information and where you don't.

You can think of each subset that you sample as giving you some extra information about the value of those data points or something like that.
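The subset-sampling procedure being described can be sketched as a linear regression from "which training points were included" to the model's output. Everything below is a self-contained toy simulation (the sizes, noise level, and simulated influences are assumptions for illustration, not details from the episode):

```python
# Sketch of the subset-sampling idea: approximate a trained model's output
# on a target example as a linear function of the training-subset mask.
import numpy as np

rng = np.random.default_rng(0)
n_train = 50          # training-set size (toy scale)
n_subsets = 500       # number of random subsets sampled
alpha = 0.5           # fraction of the training set kept in each subset

# Ground-truth per-example "influence" (unknown in practice; simulated here
# so the sketch is runnable without training real models).
true_weights = rng.normal(size=n_train)

# Each row is a 0/1 indicator of which training points were in that subset.
masks = (rng.random((n_subsets, n_train)) < alpha).astype(float)

# Simulated model output (e.g. margin on a target example) for each subset:
# linear in the mask plus noise, matching the linear-approximation hypothesis.
outputs = masks @ true_weights + 0.1 * rng.normal(size=n_subsets)

# Fit the linear surrogate by least squares: masks -> outputs.
weights, *_ = np.linalg.lstsq(masks, outputs, rcond=None)

# The recovered weights estimate each training point's effect on the output.
corr = np.corrcoef(weights, true_weights)[0, 1]
print(f"correlation with true influences: {corr:.3f}")
```

Each sampled subset contributes one regression equation, which is the sense in which every subset "gives you some extra information" about the value of individual data points; an adaptive scheme would choose the next mask to maximize that information rather than sampling uniformly.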

Unfortunately, because of the way our compute is structured, with tons of GPUs available at once, we like to saturate them all.