Joscha Bach
If you try to map this onto a metaphor that is closer to our brain, imagine that you take a language model, or a model like DALL-E, and use, for instance, image-guided diffusion to approximate a camera image, using the activation state of the neural network to interpret the camera image, which in principle I think will be possible very soon. You do this periodically.
And now you look at what these patterns look like over time, as this thing interacts with the world periodically. And these time slices, they are somewhat equivalent to the activation state of the brain at a given moment.
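The idea of periodic activation "time slices" can be sketched in a few lines. This is only an illustration of the concept, not anything from the conversation: the encoder below is a hypothetical stand-in (a fixed random projection plus a nonlinearity) for the vision network being described, and the "camera frames" are a slowly drifting random vector, so adjacent slices should resemble each other more than distant ones.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-in for a vision model: a fixed random projection
# followed by a nonlinearity. In the scenario described, this would be
# a diffusion/vision network interpreting camera input.
W = rng.normal(size=(64, 16))

def encode(frame):
    """Return the 'activation state' of the toy network for one frame."""
    return np.tanh(frame @ W)

# A slowly changing stream of 'camera frames': each frame is the
# previous one plus a little noise, so adjacent frames are related.
frames = [rng.normal(size=64)]
for _ in range(9):
    frames.append(frames[-1] + 0.05 * rng.normal(size=64))

# Periodic time slices of the activation state.
slices = [encode(f) for f in frames]

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

# Adjacent slices should be more similar than distant ones.
adjacent = cosine(slices[0], slices[1])
distant = cosine(slices[0], slices[-1])
print(adjacent, distant)
```

The point of the sketch is only the shape of the procedure: sample the activation state periodically, and the sequence of slices, not any single slice, carries the temporal structure of the interaction with the world.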
For me, it's fascinating that they are so vastly different and yet in some circumstances produce somewhat similar behavior. And the brain is first of all different because it's a self-organizing system where the individual cell is an agent that is communicating with the other agents around it and is always trying to find some solution. And all the structure that pops up is emergent structure.
Right. So one way in which you could try to look at this is that individual neurons probably need to get a reward so they become trainable, which means they have to have inputs that are not affecting the metabolism of the cell directly, but they are messages, semantic messages that tell the cell whether it has done good or bad and in which direction it should shift its behavior.
Once you have such an input, neurons become trainable and you can train them to perform computations by exchanging messages with other neurons. And parts of the signals that they are exchanging and parts of the computation that they're performing are control messages that perform management tasks for other neurons and other cells.
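The "reward input that makes a neuron trainable" has a standard toy formalization: a three-factor rule in which a scalar reward signal, compared to a running baseline, gates the correlation between a neuron's inputs and its output (this is REINFORCE for a single stochastic unit). Everything here is illustrative, not a claim about real cortical learning; the task (fire exactly when input channel 0 is active) is made up for the example.

```python
import numpy as np

rng = np.random.default_rng(1)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# One toy neuron: 4 inputs plus a bias term.
w = np.zeros(5)

def probe(x, w):
    """Firing probability for input x (bias input appended)."""
    return sigmoid(np.append(x, 1.0) @ w)

eta = 0.3
baseline = 0.5                         # running estimate of average reward
for _ in range(5000):
    x = rng.choice([0.0, 1.0], size=4)
    p = probe(x, w)
    a = float(rng.random() < p)        # stochastic spike (exploration)
    # The 'semantic reward message': 1 if the spike matched the task
    # (fire exactly when channel 0 is active), else 0. The neuron never
    # sees the task itself, only this scalar feedback.
    r = 1.0 if a == x[0] else 0.0
    # Three-factor update: (reward - baseline) gates the correlation
    # between the spike and the inputs.
    w += eta * (r - baseline) * (a - p) * np.append(x, 1.0)
    baseline += 0.01 * (r - baseline)

on = probe(np.array([1.0, 0.0, 1.0, 0.0]), w)
off = probe(np.array([0.0, 1.0, 1.0, 0.0]), w)
print(round(on, 2), round(off, 2))
```

Note that the reward line does not touch the neuron's "metabolism" (its weights) directly; it only shifts the direction of an update the neuron was already computing from its own inputs and output, which matches the distinction drawn above.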
I also suspect that the brain does not stop at the boundary between neurons and other cells; many adjacent cells will be intimately involved in the functionality of the brain and will be instrumental in distributing rewards and in managing that functionality.
So first of all, there's a different loss function at work when we learn. And to me, it's fascinating that you can build a system that looks at 800 million pictures and captions and correlates them. Because I don't think that a human nervous system could do this.
For us, the world is only learnable because adjacent frames are related, and we can afford to discard most of that information during learning. We basically only take in stuff that makes us more coherent, not less coherent. And our neural networks are willing to look at data that does not make the neural network coherent at first, but only in the long run.
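The loss function being contrasted here, for systems that "correlate pictures and captions," is typically a symmetric contrastive loss of the kind used by CLIP-style models: matched image-caption pairs should score higher than every mismatched pairing in a batch. The sketch below fakes a batch of matched pairs by adding noise to shared vectors; it shows the shape of the objective, not any actual training setup.

```python
import numpy as np

rng = np.random.default_rng(2)

def normalize(v):
    return v / np.linalg.norm(v, axis=-1, keepdims=True)

# Toy embeddings for a batch of 4 image-caption pairs. In a real
# system these come from separate image and text encoders; here each
# caption embedding is its image embedding plus noise, so matched
# pairs are more similar than mismatched ones.
imgs = normalize(rng.normal(size=(4, 8)))
caps = normalize(imgs + 0.1 * rng.normal(size=(4, 8)))

# Similarity logits: entry [i, j] compares image i with caption j.
logits = imgs @ caps.T / 0.07          # temperature, as in CLIP-style training

def cross_entropy(logits, targets):
    # Row-wise softmax cross-entropy against the matching targets.
    z = logits - logits.max(axis=1, keepdims=True)
    log_probs = z - np.log(np.exp(z).sum(axis=1, keepdims=True))
    return -log_probs[np.arange(len(targets)), targets].mean()

targets = np.arange(4)                 # pair i matches caption i
# Symmetric loss: images -> captions and captions -> images.
loss = 0.5 * (cross_entropy(logits, targets)
              + cross_entropy(logits.T, targets))
print(loss)
```

This objective happily consumes unordered, unrelated pairs from a huge corpus; it has no notion of adjacent frames being related, which is exactly the difference from the coherence-driven learning described above.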