
Joscha Bach

👤 Speaker
1434 total appearances

Appearances Over Time

Podcast Appearances

Lex Fridman Podcast
#392 – Joscha Bach: Life, Intelligence, Consciousness, AI & the Future of Humans

If you try to map this into a metaphor that is closer to our brain, imagine that you take a language model or a model like DALL-E and use, for instance, image-guided diffusion to approximate a camera image, and then use the activation state of the neural network to interpret that camera image, which in principle I think will be possible very soon. You do this periodically.

Lex Fridman Podcast
#392 – Joscha Bach: Life, Intelligence, Consciousness, AI & the Future of Humans

And now you look at what these patterns look like over time, as this thing interacts with the world periodically. And these time slices are somewhat equivalent to the activation state of the brain at a given moment.
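
The picture sketched here can be made concrete with a toy loop. Everything in the sketch below is hypothetical: the camera, the "guided diffusion" update, and the activation read-out are stand-ins chosen only to show the structure of the idea, that a generative model periodically approximates the current camera frame and that its activation state at each step is kept as one time slice.

```python
# Illustrative sketch only: none of these components or function names come from
# the conversation; they stand in for "some generative model", "some camera",
# and "some way to read out activations".
import numpy as np

def get_camera_frame():
    """Hypothetical camera read-out: here just random pixels."""
    return np.random.rand(64, 64, 3)

def guided_diffusion_approximate(frame, model_state):
    """Hypothetical image-guided diffusion step: nudge the model's internal
    'canvas' toward the observed frame instead of generating freely."""
    canvas, activations = model_state
    canvas = canvas + 0.5 * (frame - canvas)          # move the reconstruction toward the frame
    activations = np.tanh(canvas.mean(axis=-1))       # stand-in for the network's activation state
    return canvas, activations

# Periodically approximate the camera image and keep the activation state of
# each step as a "time slice", loosely analogous to the brain's activation
# state at a given moment.
model_state = (np.zeros((64, 64, 3)), np.zeros((64, 64)))
time_slices = []
for t in range(10):                                   # ten perception cycles
    frame = get_camera_frame()
    model_state = guided_diffusion_approximate(frame, model_state)
    time_slices.append(model_state[1].copy())         # record this moment's activation state

print(len(time_slices), time_slices[0].shape)         # 10 slices of shape (64, 64)
```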

Lex Fridman Podcast
#392 – Joscha Bach: Life, Intelligence, Consciousness, AI & the Future of Humans

For me, it's fascinating that they are so vastly different and yet in some circumstances produce somewhat similar behavior. And the brain is first of all different because it's a self-organizing system where the individual cell is an agent that is communicating with the other agents around it and is always trying to find some solution. And all the structure that pops up is emergent structure.
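
As a rough illustration of the "every cell is an agent" framing, the toy simulation below gives each grid cell a local state and a purely local update rule. The grid and the rule are invented for this sketch, not taken from the conversation or from any neuroscience model; the point is only that large-scale structure appears even though no cell ever sees the whole grid.

```python
# Toy illustration of self-organization from local interactions only.
import numpy as np

rng = np.random.default_rng(0)
grid = rng.random((32, 32))            # each entry is one "cell agent" with a local state

def step(grid):
    """Each cell looks only at its four neighbours and nudges its state toward
    their average: purely local messages, no global controller."""
    up    = np.roll(grid,  1, axis=0)
    down  = np.roll(grid, -1, axis=0)
    left  = np.roll(grid,  1, axis=1)
    right = np.roll(grid, -1, axis=1)
    neighbour_mean = (up + down + left + right) / 4.0
    return grid + 0.2 * (neighbour_mean - grid)

for _ in range(100):
    grid = step(grid)

# Smooth, larger-scale patches remain while the fine-grained noise is washed
# out: the structure is emergent, not designed in by any single cell.
print(grid.std())
```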

Lex Fridman Podcast
#392 – Joscha Bach: Life, Intelligence, Consciousness, AI & the Future of Humans

Right. So one way in which you could try to look at this is that individual neurons probably need to get a reward so they become trainable, which means they have to have inputs that do not affect the metabolism of the cell directly, but are semantic messages that tell the cell whether it has done well or badly and in which direction it should shift its behavior.
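
One textbook-style way to cash this out is a reward-modulated (three-factor) Hebbian update, where a scalar "good/bad" message gates an otherwise local weight change. The sketch below is only illustrative; the task, the numbers, and the specific learning rule are assumptions, not something specified in the conversation.

```python
# Hedged sketch of the idea that a neuron becomes trainable once it receives a
# separate reward signal: a reward-modulated Hebbian ("three-factor") update.
import numpy as np

rng = np.random.default_rng(1)
w = rng.normal(scale=0.1, size=4)       # synaptic weights of one model neuron

def neuron(x, w):
    return float(np.tanh(w @ x))        # the cell's ordinary activity

for _ in range(2000):
    x = rng.normal(size=4)
    y = neuron(x, w)
    target = float(np.sign(x[0]))       # some behaviour the environment happens to reward
    reward = 1.0 if np.sign(y) == target else -1.0   # semantic "good/bad" message, not metabolism
    w += 0.01 * reward * y * x          # shift behaviour in the rewarded direction

print(w)                                # the weight on x[0] grows; the others stay small
```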

Lex Fridman Podcast
#392 – Joscha Bach: Life, Intelligence, Consciousness, AI & the Future of Humans

Once you have such an input, neurons become trainable and you can train them to perform computations by exchanging messages with other neurons. And parts of the signals that they are exchanging and parts of the computation that they're performing are control messages that perform management tasks for other neurons and other cells.

Lex Fridman Podcast
#392 – Joscha Bach: Life, Intelligence, Consciousness, AI & the Future of Humans

I also suspect that the brain does not stop at the boundary of neurons to other cells, but many adjacent cells will be involved intimately in the functionality of the brain and will be instrumental in distributing rewards and in managing its functionality.

Lex Fridman Podcast
#392 – Joscha Bach: Life, Intelligence, Consciousness, AI & the Future of Humans

So first of all, there's a different loss function at work when we learn. And to me, it's fascinating that you can build a system that looks at 800 million pictures and captions and correlates them. Because I don't think that a human nervous system could do this.
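
A system that "correlates" hundreds of millions of picture-caption pairs is most naturally read as something CLIP-like, trained with a symmetric contrastive loss. The sketch below shows that loss on random stand-in embeddings; the encoders, batch size, and temperature are placeholders, and reading "correlates" as contrastive training is an assumption rather than something stated in the conversation.

```python
# Symmetric contrastive (CLIP-style) loss: matching image/caption pairs are
# pulled together, mismatched pairs pushed apart. Embeddings are stand-ins.
import numpy as np

rng = np.random.default_rng(2)
batch = 8
img_emb = rng.normal(size=(batch, 32))   # pretend image-encoder outputs
txt_emb = rng.normal(size=(batch, 32))   # pretend caption-encoder outputs

def normalize(v):
    return v / np.linalg.norm(v, axis=1, keepdims=True)

img_emb, txt_emb = normalize(img_emb), normalize(txt_emb)
logits = img_emb @ txt_emb.T / 0.07      # cosine similarities with a temperature

def cross_entropy(logits, labels):
    logits = logits - logits.max(axis=1, keepdims=True)
    log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -log_probs[np.arange(len(labels)), labels].mean()

labels = np.arange(batch)                # the i-th caption belongs to the i-th image
loss = 0.5 * (cross_entropy(logits, labels) + cross_entropy(logits.T, labels))
print(loss)                              # training drives this down over hundreds of millions of pairs
```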

Lex Fridman Podcast
#392 – Joscha Bach: Life, Intelligence, Consciousness, AI & the Future of Humans

For us, the world is only learnable because adjacent frames are related, and we can afford to discard most of that information during learning. We basically take in only the stuff that makes us more coherent, not less coherent. And our neural networks are willing to look at data that does not make the neural network coherent at first, but only in the long run.
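
One illustrative way to see the contrast (a reading of the quote, not something it states): a learner that exploits the relatedness of adjacent frames can score features by how slowly they change from one frame to the next, which only works when the frames actually are related. The toy code below compares such a coherence score on ordered versus shuffled frames; the feature map and the "video" are stand-ins.

```python
# Illustrative only: a temporal-coherence ("slowness") score on adjacent frames
# versus the same score on shuffled frames.
import numpy as np

rng = np.random.default_rng(3)

def features(frame):
    return np.tanh(frame.mean(axis=0))        # stand-in for a learned feature map

# A toy "video": each frame is a small perturbation of the previous one,
# so adjacent frames are related.
frames = [rng.normal(size=(8, 8))]
for _ in range(99):
    frames.append(frames[-1] + 0.05 * rng.normal(size=(8, 8)))
frames = np.stack(frames)

def incoherence(frames):
    """Average feature change between consecutive frames; lower = more coherent."""
    feats = np.stack([features(f) for f in frames])
    return float(np.mean((feats[1:] - feats[:-1]) ** 2))

ordered  = incoherence(frames)
shuffled = incoherence(frames[rng.permutation(len(frames))])
print(ordered, shuffled)   # ordered frames are far more coherent than shuffled ones
```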