Michael Levin
You have to know what your borders are.
So that action of aligning your parts and coming to be this, I'm even going to say this emergence, we just don't have a good vocabulary for it.
This emergence of a model that aligns all the parts is really critical to keep that thing going.
There's something else that's really interesting.
And I was thinking about this in the context of this question, these beautiful kinds of ideas.
There's this amazing thing that we found, and this is largely the work of Federico Pagosi in my group.
So a couple of years ago, we saw that networks of chemicals can learn.
They have five or six different kinds of learning that they can do.
And so what I asked them to do was to calculate the causal emergence of those networks while they're learning.
And what I mean by that is this.
If you're a rat and you learn to press a lever and get a reward...
There's no individual cell that had both experiences, right?
The cells at your paw touched the lever.
The cells in your gut got the delicious reward.
No individual cell had both experiences.
Who owns that associative memory?
Well, the rat.
So that means you have to be integrated, right?
If you're going to learn associative memories from different parts, you have to be an integrated agent that can do that.