Daniel Jeffries (Unknown)
So you're an MIT researcher and obviously I'm sure MIT has like massive amounts of compute resources and so on.
But, you know, obviously you could go and work for Anthropic and they've got even more.
But there's always this notion that you're a little bit restricted.
So we're kind of building smaller models for analysis.
I mean, in a perfect world, would you just like to have OpenAI's compute cluster and you would play with that?
Yeah, I think it's interesting because having all of that compute almost makes you intellectually lazy.
So the easy thing to do is just to run bigger and bigger experiments.
And that's why I love a lot of the work out of MIT.
I don't know if you know Kevin Ellis, under Josh Tenenbaum.
He's doing this DreamCoder thing for program synthesis.
And even though at the moment the results are not state of the art, I just love this idea of having a principled approach.
And I think that it will pay off in the long run.
And so I'm really happy that there are folks like you actually, you know, doing theory and going to first principles.
Amazing, amazing.
Well, going back to the features thing.
So we're talking about the kinds of representations these models learn and, you know, how robust they are.
Now, you wrote a landmark paper a few years ago called "Adversarial Examples Are Not Bugs, They Are Features."
This was really good.
I learned about this from Hardy Solomon.