Daniel Jeffries
Does it work and how well does it work?
Yeah, interesting.
Because, you know, neural networks learn statistics of increasing complexity as you train them for longer and as the models get bigger and so on.
And do you notice, for example, that a linear data model is biased toward simpler features, or does it seem to work on more complex models as well?
Very cool.
So you're building this supervised regression model and you're sampling subsets of the data to train it.
What's the methodology there?
How do you sample the subsets?
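(To make the methodology being asked about concrete, here is a minimal, self-contained sketch of a subset-sampling pipeline in this spirit: sample random training subsets, retrain a model on each, record a scalar output on one fixed target example, then fit a linear surrogate from subset membership to that output. The toy data, the logistic-regression base model, and the ridge surrogate are all illustrative assumptions, not the setup actually used in the episode or the paper.)

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression, Ridge

rng = np.random.default_rng(0)
X, y = make_classification(n_samples=300, n_features=10, random_state=0)
x_target, y_target = X[0], y[0]        # fixed example whose output we model
X_pool, y_pool = X[1:], y[1:]          # training pool to subsample from
n_pool, n_subsets, frac = len(X_pool), 400, 0.5

masks = np.zeros((n_subsets, n_pool))  # row i = indicator vector of subset i
outputs = np.zeros(n_subsets)          # scalar model output per subset

for i in range(n_subsets):
    idx = rng.choice(n_pool, size=int(frac * n_pool), replace=False)
    masks[i, idx] = 1.0
    clf = LogisticRegression(max_iter=1000).fit(X_pool[idx], y_pool[idx])
    proba = clf.predict_proba(x_target.reshape(1, -1))[0]
    # Signed confidence in the correct class (positive iff prediction is correct).
    outputs[i] = proba[y_target] - proba[1 - y_target]

# The linear surrogate: predicts the model's output from subset membership alone.
datamodel = Ridge(alpha=1.0).fit(masks, outputs)
influence = datamodel.coef_            # one weight per training example
```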
So on this surrogate model, I mean, what is a good output to track, if that makes sense?
I mean, let's say we're doing classification or something like that.
What should we track?
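(For context on the question: a common choice in this line of work is to track the correct-class margin, i.e. the correct class's score minus the highest other class's score, rather than raw 0/1 accuracy, since it varies smoothly across retrainings. A small sketch, with the function name and example values purely illustrative:)

```python
import numpy as np

def correct_class_margin(logits: np.ndarray, label: int) -> float:
    """Margin of the correct class over the best incorrect class.

    A smooth, signed quantity often tracked instead of 0/1 accuracy:
    positive exactly when the prediction is correct, and its magnitude
    reflects confidence. `logits` is a 1-D array of per-class scores.
    """
    others = np.delete(logits, label)
    return float(logits[label] - others.max())

# Example: 3-class logits where class 1 is correct.
print(correct_class_margin(np.array([1.2, 2.5, 0.3]), label=1))  # 1.3
```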
So what's the difference? Because obviously we're talking about a surrogate model that has this more abstract notion of dataset statistics.
I mean, how does that compare to just, say, doing analysis on a specific model?
So when we've got this data model, what kind of stuff can we do with it?
One of the things you looked at in the paper was the strength of the data model's embeddings versus using, say, the penultimate layer in the original model.
How does that work?
Yeah, so what did you find?
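(To illustrate the comparison under discussion: one crude way to contrast datamodel-weight embeddings with penultimate-layer features is to check how much their nearest-neighbour structure agrees. Everything below, including the placeholder arrays, is assumed for illustration and is not the paper's actual evaluation.)

```python
import numpy as np
from sklearn.neighbors import NearestNeighbors

rng = np.random.default_rng(0)
n_test, n_train, feat_dim = 100, 1000, 64
datamodel_weights = rng.normal(size=(n_test, n_train))   # placeholder weights
penultimate_feats = rng.normal(size=(n_test, feat_dim))  # placeholder features

def knn_indices(emb: np.ndarray, k: int = 5) -> np.ndarray:
    """Indices of each row's k nearest neighbours (excluding itself)."""
    nn = NearestNeighbors(n_neighbors=k + 1).fit(emb)
    return nn.kneighbors(emb, return_distance=False)[:, 1:]

dm_nn = knn_indices(datamodel_weights)
pl_nn = knn_indices(penultimate_feats)

# Mean overlap between the two neighbour sets: one rough measure of how
# differently the two embeddings organize the test set.
overlap = np.mean([len(set(a) & set(b)) / len(a) for a, b in zip(dm_nn, pl_nn)])
print(f"mean k-NN overlap: {overlap:.2f}")
```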
So, I mean, I suppose you can compare classes of learning algorithms, but these are also potentially class-specific.
So could you use them to kind of figure out confusion between classes and stuff like that?
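(A speculative sketch of how the class-confusion idea raised here might look in code: aggregate each test example's datamodel weights by the class of the training examples, giving a test-class by train-class matrix whose off-diagonal mass hints at which classes support or confuse one another. The placeholder inputs and the aggregation itself are assumptions, not something described in the episode.)

```python
import numpy as np

rng = np.random.default_rng(0)
n_test, n_train, n_classes = 100, 1000, 10
weights = rng.normal(size=(n_test, n_train))     # placeholder datamodel weights
train_labels = np.arange(n_train) % n_classes    # placeholder labels
test_labels = np.arange(n_test) % n_classes

# confusion[i, j] = average total weight that class-j training examples
# contribute to the datamodels of class-i test examples.
confusion = np.zeros((n_classes, n_classes))
for c_train in range(n_classes):
    class_mass = weights[:, train_labels == c_train].sum(axis=1)
    for c_test in range(n_classes):
        confusion[c_test, c_train] = class_mass[test_labels == c_test].mean()

# Large off-diagonal entries would suggest one class's training data strongly
# influences another class's predictions: a possible confusion signal.
print(np.round(confusion, 2))
```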
Cool.