Joscha Bach
GPT-N, probably. It's not even clear for the present systems. When I talk to my friends at OpenAI, they feel that this question of whether the models are currently conscious is much more complicated than many people might think. I guess it's not that OpenAI has a homogeneous opinion about this. There are some aspects to this.
One is, of course, that this language model has read a lot of text in which people were conscious or described their own consciousness, and it's emulating this. And if it's conscious, it's probably not conscious in a way that is close to the way in which human beings are conscious.
But while it is going through these states, going through a 100-step function that is emulating adjacent brain states that require a degree of self-reflection, it can also create a model of an observer that is reflecting itself in real time and describe what that's like. And while this model is a deepfake, our own consciousness is also "as if." It's virtual, right? It's not physical.
Our consciousness is a representation of a self-reflexive observer that only exists in patterns of interaction between cells. So it is not a physical object in the sense that it exists in base reality; it's really a representational object that develops its causal power only from a certain modeling perspective.
Yes. And so, to what degree is the virtuality of the consciousness in ChatGPT more virtual and less causal than the virtuality of our own consciousness? But you could say it doesn't count. It doesn't count much more than the consciousness of a character in a novel, right?
It's important for the reader to have the outcome: the artifact of a model that is describing, in the text generated by the author of the book, what it's like to be conscious in a particular situation, and that performs the necessary inferences. But the task of creating coherence in real time in a self-organizing system, by keeping yourself coherent so that the system is reflexive
That is something that language models don't need to do. So there is no causal need for the system to be conscious in the same way as we are.