Sergey Levine
It's putting together things you've seen in new ways.
And arguably there's nothing profoundly new here, because yes, you've seen different words written that way, but you've figured out that you can compose the words in this other language the same way you've composed words in English.
So that's actually where the emergent capabilities come from.
Because of this, in principle, if we have a sufficient diversity of behaviors, the model should figure out that those behaviors can be composed in new ways as the situation calls for it.
We've actually seen this even with our current models. I should say that, in the grand scheme of things, looking back five years from now, we'll probably think these are tiny in scale, but we've already seen what I would call emergent capabilities.
When we were playing around with some of our laundry folding policies, we actually discovered this by accident.
The robot accidentally picked up two T-shirts out of the bin instead of one, started folding the first one, and when the other one got in the way, it picked it up and threw it back in the bin.
And we're like, we didn't know it would do that.
Like, holy crap.
And then we played around with it, and yep, it does that every time.
While it's doing its work, you can drop something else on the table, and it'll just pick it up and put it back, right?
Okay, that's cool.
Same with the shopping bag: it starts putting things in the shopping bag, the bag tips over, and it picks it back up and stands it upright.
We didn't tell anybody to collect data for that.
I'm sure somebody at some point accidentally, or maybe intentionally, picked up the shopping bag, but really it's this kind of compositionality that emerges when you do learning at scale.
And that's really where all these remarkable capabilities come from.