Joscha Bach
There are limitations in a language model alone. I feel that part of my mind works similarly to a language model, which means I can yell into it a prompt and it's going to give me a creative response. But I have to do something with this response first. I have to take it as a generative artifact that may or may not be true. It's usually a confabulation. It's just an idea.
And then I take this idea and modify it. I might build a new prompt that steps off this idea and develops it to the next level, or put it into something larger. Or I might try to prove whether it's true, or make an experiment. And this is what the language models right now are not doing yet. But there's also no technical reason why they shouldn't be able to do this.
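The generate-then-verify loop described above can be sketched in a few lines. This is a hypothetical illustration, not a real API: `llm` and `verify` are placeholder stubs standing in for a language model call and an external check (a proof attempt, an experiment, a retrieval lookup).

```python
def llm(prompt: str) -> str:
    # Stub: a real system would call a language model here and get back
    # a creative, possibly confabulated response.
    return f"idea for: {prompt}"

def verify(idea: str) -> bool:
    # Stub: a real system would test the idea against data, a proof,
    # or an experiment before trusting it.
    return "idea" in idea

def develop(question: str, steps: int = 3) -> str:
    """Treat each model output as a generative artifact to be checked,
    then fold the surviving ideas into the next prompt."""
    prompt = question
    for _ in range(steps):
        idea = llm(prompt)      # may or may not be true; just an idea
        if not verify(idea):    # discard confabulations that fail the check
            continue
        prompt = f"{question}\nBuilding on: {idea}"  # step off the idea
    return prompt
```

The point is that the checking and the prompt-building happen outside the model; the model itself is only the idea generator inside the loop.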
So the way to make a language model coherent is probably not to use reinforcement learning until it only gives you one possible answer that links to its source data. It's to use it as a component in a larger system, one that can also be built by the language model, or is enabled by language-model-structured components, or uses different technologies.
I suspect that language models will be an important stepping stone in developing different types of systems. And one thing that is really missing in the form of language models that we have today is real-time world coupling. It's difficult to do perception with a language model and motor control with a language model. Instead, you would need to have a different type of thing that is...
working with it. Also, the language model is a little bit obscuring what its actual functionality is. Some people associate the structure of the neural network of the language model with the nervous system, and I think that's the wrong intuition. Neural networks are unlike nervous systems. They are more like
100-step functions that use differentiable linear algebra to approximate the correlation between adjacent brain states. It's basically a function that moves the system from one representational state to the next representational state.
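The "100-step function" framing above can be illustrated directly: a forward pass is roughly a hundred composed differentiable linear-algebra steps that map one representational state to the next. This is a toy sketch, not an actual model; the dimensions, the layer count, and the random fixed weights (standing in for trained parameters) are all arbitrary placeholders.

```python
import numpy as np

rng = np.random.default_rng(0)
DIM, STEPS = 16, 100  # placeholder state size and step count

# Random fixed matrices stand in for trained weights.
weights = [rng.standard_normal((DIM, DIM)) / np.sqrt(DIM) for _ in range(STEPS)]

def step(state, W):
    # One differentiable step: a linear map followed by a nonlinearity.
    return np.tanh(W @ state)

def forward(state):
    # Composing the ~100 steps yields the function that moves the system
    # from one representational state to the next.
    for W in weights:
        state = step(state, W)
    return state

next_state = forward(rng.standard_normal(DIM))
```

Seen this way, the network is not a nervous system with ongoing dynamics but a single feed-forward mapping between adjacent states.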
If you try to map this into a metaphor that is closer to our brain, imagine that you would take a language model or a model like DALL-E, that you use, for instance, image-guided diffusion to approximate a camera image and use the activation state of the neural network to interpret the camera image, which in principle I think will be possible very soon. You do this periodically.