John Schulman
Actually, you could also say maybe in the future, we'll say the model itself.
So I would say we're not going there yet.
But anyway, yeah, we have these different stakeholders.
Sometimes they have conflicting demands and we have to make some call on how to resolve those conflicts.
And it's not always obvious how to do that.
So I would say we had to think through the trade-offs, and basically the rough heuristic is that we mostly want the models to follow your instructions and be helpful to the user and the developer.
But when this impinges on other people's happiness or way of life, it becomes a problem and we have to block certain kinds of usage.
But mostly we want the models to just be an extension of people's will and do what they say.
We don't want to be too paternalistic.
We want to be kind of neutral and not like impose our opinions on people.
Yeah, we want to mostly let people do what they want with the models.
Like in this case, you really are going after the edge cases.
Yeah, we wanted it to be very actionable, so that it wasn't just a bunch of nice-sounding principles, but each example tells you something about some non-obvious situation and reasons through that situation.
Everyone has their complaints about the ML literature, but I would say overall, I think it's a relatively healthy field compared to some other ones like in the social sciences, just because, well, it's largely grounded in practicality and getting things to work.
If you publish something that can't be replicated easily, then people will just forget about it.
It's accepted that often you don't just report someone's number from their paper, you also try to re-implement their method and compare it to your method on, say, the same training dataset.