Daniel Jeffries (Unknown)

👤 Speaker
209 total appearances

Appearances Over Time

Podcast Appearances

Machine Learning Street Talk (MLST)
Adversarial Examples and Data Modelling - Andrew Ilyas (MIT)

And he said the problem with the autoencoders is that they bias towards low-frequency data.

It looks like quite blurry kind of data and you can mitigate that by just adding some pink noise or something to the image to try and convince the model not to learn so much of the mass of the low frequency data.
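To make that mitigation concrete, here is a minimal NumPy sketch of adding pink (1/f) noise to an image before it goes into an autoencoder. This is only an illustration of the general idea; the function names and the strength value are placeholder choices, not anything specified in the episode.

```python
import numpy as np

def pink_noise_2d(h, w, seed=None):
    """2D pink (1/f power) noise made by shaping white noise in the frequency domain."""
    rng = np.random.default_rng(seed)
    spectrum = np.fft.fft2(rng.standard_normal((h, w)))
    fy = np.fft.fftfreq(h)[:, None]
    fx = np.fft.fftfreq(w)[None, :]
    f = np.sqrt(fx**2 + fy**2)
    f[0, 0] = 1.0                        # avoid dividing by zero at the DC bin
    # Scale amplitude by 1/sqrt(f) so that power falls off as 1/f (pink noise).
    pink = np.real(np.fft.ifft2(spectrum / np.sqrt(f)))
    return (pink - pink.mean()) / (pink.std() + 1e-8)

def add_pink_noise(image, strength=0.1, seed=None):
    """Additively corrupt an image in [0, 1], shape (H, W) or (H, W, C), with pink noise."""
    noise = pink_noise_2d(image.shape[0], image.shape[1], seed)
    if image.ndim == 3:
        noise = noise[:, :, None]
    return np.clip(image + strength * noise, 0.0, 1.0)

# Usage: augment a random "image" before feeding it to the autoencoder.
img = np.random.default_rng(0).random((64, 64, 3))
noisy = add_pink_noise(img, strength=0.1, seed=1)
```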

And there has been discussion about, for example, we tend to look at shapes and objects.

and image classifiers tend to kind of look at, you know, textures and stuff like that.

So, I don't know, how do you kind of think about different types of features that these models learn?

So when you do this kind of robustness training, I guess what you're trying to do is selectively tell the model, don't learn these features, but do learn those features.

And there's a variety of different ways of doing this, as we just discussed.

But in a sense, you're kind of blindfolding the model.

So the model now cannot learn all of the things it would otherwise have learned.

So there must be like some kind of trade off with the accuracy.
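For readers who want to see what this selective "blindfolding" looks like in code, below is a minimal PyTorch sketch of PGD-style adversarial training, which is the most common form of the robustness training being discussed here. The model, epsilon, step sizes, and data are placeholder choices; the point is only that the network is fit to worst-case perturbed inputs, which suppresses reliance on non-robust features and typically costs some clean accuracy.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def pgd_attack(model, x, y, eps=8/255, alpha=2/255, steps=10):
    """Untargeted L-infinity PGD: find a perturbation within eps that maximizes the loss."""
    x_adv = (x + torch.empty_like(x).uniform_(-eps, eps)).clamp(0, 1).detach()
    for _ in range(steps):
        x_adv.requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)
        grad = torch.autograd.grad(loss, x_adv)[0]
        # Ascend the loss, then project back into the eps-ball and the valid pixel range.
        x_adv = x_adv.detach() + alpha * grad.sign()
        x_adv = torch.min(torch.max(x_adv, x - eps), x + eps).clamp(0, 1)
    return x_adv.detach()

def adversarial_training_step(model, optimizer, x, y):
    """One optimizer step on adversarially perturbed inputs instead of clean ones."""
    model.eval()                       # e.g. keep batch-norm statistics fixed while attacking
    x_adv = pgd_attack(model, x, y)
    model.train()
    optimizer.zero_grad()
    loss = F.cross_entropy(model(x_adv), y)
    loss.backward()
    optimizer.step()
    return loss.item()

# Usage with a tiny placeholder classifier on random 32x32 RGB data.
model = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 128), nn.ReLU(), nn.Linear(128, 10))
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)
x, y = torch.rand(16, 3, 32, 32), torch.randint(0, 10, (16,))
print(adversarial_training_step(model, optimizer, x, y))
```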

Yeah, that's so interesting you say that because, again, when I spoke with Randall Balestriero, he's got this paper out, which is called something to do with the spline theory giving rise to adversarial grokking.

That's not the name of the title, but something like that.

And he's got this idea that in the spline theory of neural networks, you know, MLP networks, they partition the ambient space up into these locally affine regions in an input-sensitive way.

And early on in training, when you visualize the ambient space and how it's being partitioned by the ReLU functions, you know, it's just a complete mess.

And when you train them for a ridiculous amount of time, so, you know, much, much longer than any normal network would,

you get this local decomplexification.

So what happens is all of these boundaries form, they kind of compress into this kind of, it looks like a Voronoi diagram, you know, like a kind of contour map, and you get more and more space outside the input examples.
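To make that picture more concrete, here is a rough PyTorch/NumPy sketch (not Balestriero's actual code) that colours a 2D input grid by the ReLU on/off pattern of a small untrained MLP; each constant-pattern cell is one of the locally affine regions, and re-running the same plot at different points in training is how one would watch the boundaries compress into the Voronoi-like map described above.

```python
import numpy as np
import torch
import torch.nn as nn

torch.manual_seed(0)

# Small ReLU MLP on 2D inputs: on each region with a fixed ReLU on/off pattern
# the network is a single affine map, so those patterns index the pieces.
model = nn.Sequential(nn.Linear(2, 32), nn.ReLU(), nn.Linear(32, 32), nn.ReLU(), nn.Linear(32, 1))

def activation_pattern(model, x):
    """Return the concatenated ReLU on/off pattern for a batch of 2D inputs."""
    patterns, h = [], x
    for layer in model:
        h = layer(h)
        if isinstance(layer, nn.ReLU):
            patterns.append((h > 0).to(torch.int8))
    return torch.cat(patterns, dim=1)

# Evaluate the pattern on a grid covering the ambient (input) space.
lin = torch.linspace(-3, 3, 400)
xx, yy = torch.meshgrid(lin, lin, indexing="xy")
grid = torch.stack([xx.reshape(-1), yy.reshape(-1)], dim=1)

with torch.no_grad():
    pats = activation_pattern(model, grid).numpy()

# Grid cells sharing an activation pattern lie in the same locally affine region.
_, region_id = np.unique(pats, axis=0, return_inverse=True)
print("distinct locally affine regions on the grid:", region_id.max() + 1)

# Optional plot of the partition (the Voronoi-like "contour map"):
# import matplotlib.pyplot as plt
# plt.imshow(region_id.reshape(400, 400), cmap="tab20"); plt.show()
```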

And maybe it's related to this double descent phenomenon, but something interesting happens if you continue to train neural networks for a long time, they robustify.

Does that in any way go against the argument that, you know, maybe they are bugs?

Go against the argument that they are bugs, or go against the argument that they are features?