Daniel Jeffries
Speaker · 209 total appearances

Podcast Appearances

Machine Learning Street Talk (MLST)
Adversarial Examples and Data Modelling - Andrew Ilyas (MIT)

I had him on the show.

Can you explain what this paper was about?

It's a fascinating thing about neural networks that, as you say, you can actually randomly assign labels, not even in a consistent way.

And the neural network will still get 100% accuracy on the train set.

And I guess that's because, if you think about it, neural networks have enough flexibility to place decision boundaries around each individual training example, so they can essentially memorize the training set, a bit like a zip file.

So there's a spectrum, isn't there: at one end, you can memorize examples.

And then going up a little bit on the spectrum, you can memorize features.

So like these weird blue dots that you're talking about.

And then somewhere up on the spectrum, you end up with robust, out-of-domain generalization features that actually represent the thing that you want.

So I guess the question is, how do you tell the difference between the model just memorizing examples and it learning non-robust features?
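One practical probe of that distinction (my sketch, not the paper's method): memorization and feature learning look identical on the training set, but only learned features, robust or not, transfer to fresh data. The toy below compares held-out accuracy for a linear model trained on data with a genuinely predictive feature against a shuffled-label control, where memorization is the only option. The data-generating setup and helper names are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)

def make_data(n, d=10, signal=2.0):
    # one predictive feature (column 0), the rest pure noise
    y = rng.choice([-1.0, 1.0], size=n)
    X = rng.standard_normal((n, d))
    X[:, 0] += signal * y
    return X, y

def fit_linear(X, y, lr=0.1, steps=500):
    # plain logistic regression via full-batch gradient descent
    w = np.zeros(X.shape[1])
    t = (y + 1) / 2                        # targets in {0, 1}
    for _ in range(steps):
        p = 1.0 / (1.0 + np.exp(-X @ w))
        w -= lr * X.T @ (p - t) / len(y)
    return w

Xtr, ytr = make_data(200)
Xte, yte = make_data(500)

w_real = fit_linear(Xtr, ytr)
w_shuf = fit_linear(Xtr, rng.permutation(ytr))   # labels decoupled from features

acc = lambda w: np.mean(np.sign(Xte @ w) == yte)
print(f"features learned:  test acc = {acc(w_real):.2f}")   # well above chance
print(f"shuffled control:  test acc = {acc(w_shuf):.2f}")   # near chance
```

Generalization gap to held-out data separates the two regimes; it does not by itself separate robust from non-robust features, which is exactly why the adversarial evaluation in the paper is needed.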

Yeah.

So how did you go about mitigating the robustness problem in the paper?

Yeah.

And I'm thinking about the different approaches.

I mean, hypothetically, you could preprocess the data before it even goes into the model.

You could change the optimization algorithm of the model itself, or, given a trained model, you could robustify it after the fact.

So you're talking about the middle option where you're actually changing the optimization algorithm.

Yes.
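The "change the optimization" option can be sketched as adversarial training: instead of minimizing the loss on clean inputs, each step first perturbs the batch adversarially and then takes the gradient on the perturbed inputs. This toy uses a single FGSM step on a linear model as a stand-in for the multi-step PGD inner loop; it is a minimal illustration of the idea, not the paper's exact procedure.

```python
import numpy as np

rng = np.random.default_rng(2)
n, d, eps = 300, 20, 0.1
y = rng.choice([-1.0, 1.0], size=n)
X = rng.standard_normal((n, d)) + 0.5 * y[:, None]   # weak signal in every feature

def logistic_loss(w, X, y):
    return np.mean(np.log1p(np.exp(-y * (X @ w))))

def fgsm(w, X, y, eps):
    # one-step attack: move each input by eps in the sign of the
    # input-gradient of the loss (for a linear model, sign(-y * w))
    s = 1.0 / (1.0 + np.exp(y * (X @ w)))            # sigmoid(-y * w.x)
    grad_x = (-y * s)[:, None] * w[None, :]
    return X + eps * np.sign(grad_x)

w = np.zeros(d)
for _ in range(400):
    X_adv = fgsm(w, X, y, eps)                       # inner maximisation
    p = 1.0 / (1.0 + np.exp(-X_adv @ w))
    w -= 0.1 * X_adv.T @ (p - (y + 1) / 2) / n       # outer minimisation

clean_loss = logistic_loss(w, X, y)
adv_loss = logistic_loss(w, fgsm(w, X, y, eps), y)
print(f"loss clean={clean_loss:.3f}  adversarial={adv_loss:.3f}")
```

The min-max structure is the point: the optimizer only gets credit for loss it keeps low under the worst perturbation the inner attack can find, which pushes it away from non-robust features.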

Yeah, I mean, I'm interested in the different types of features, as you were just alluding to.

So Randall Balestriero yesterday, he had this paper about the difference between reconstructive methods, like, say, a masked autoencoder, versus a self-supervised contrastive image representation learning model.