Joscha Bach
Yes, that's a very interesting question. I think that these models are clearly making some inference. But if you give them a reasoning task, it's often difficult for the experimenters to figure out whether the reasoning is the result of emulating a reasoning strategy that they saw in human-written text, or whether it's something that the system was able to infer by itself.
On the other hand, if you think of human reasoning, if you want to become a very good reasoner, you don't do this by just figuring it out yourself. You read about reasoning. And the first people who tried to write about reasoning and reflect on it didn't get it right.
Even Aristotle, who thought about this very hard and came up with a theory of how syllogisms and syllogistic reasoning work, made mistakes in his attempt to build something like a formal logic and got maybe 80% right. And the people who talk about reasoning professionally today read Tarski and Frege and build on their work. So in many ways, people, when they perform reasoning,
are emulating what other people wrote about reasoning. So it's difficult to really draw this boundary. And when François Chollet says that these models are only interpolating between what they saw and what other people are doing, well, if you give them all the latent dimensions that can be extracted from the internet, what's missing? Maybe there is almost everything there.
And if you're not sufficiently informed by these dimensions and you need more, I think it's not difficult to increase the temperature in the large language model to the point that it is producing stuff that is maybe 90% nonsense and 10% viable, and combine this with some prover that is trying to filter out the viable parts from the nonsense, in the same way as our own thinking works, right?
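The generate-and-verify loop described here can be sketched in a few lines of Python. This is an illustrative toy only, not any particular model's API: `sample_candidate` is a hypothetical stand-in for a noisy high-temperature draw from a language model, and `is_viable` is a stand-in for the "prover" that filters the viable parts from the nonsense.

```python
# Minimal sketch of "high-temperature generation + verifier":
# sample many candidates (most will be nonsense), keep only the ones
# that pass an independent consistency check.
import random


def sample_candidate(prompt: str, temperature: float) -> str:
    """Placeholder for a high-temperature draw from a language model.
    Here it just emits a toy arithmetic claim, noisier at higher temperature."""
    a, b = random.randint(1, 9), random.randint(1, 9)
    noise = random.choice([0, 0, 0, random.randint(-3, 3)])  # occasional error
    claimed = a + b + (noise if temperature > 1.0 else 0)
    return f"{a} + {b} = {claimed}"


def is_viable(candidate: str) -> bool:
    """Placeholder 'prover': checks the candidate against a ground-truth rule
    (here, plain arithmetic) and rejects the nonsense."""
    lhs, rhs = candidate.split("=")
    return eval(lhs) == int(rhs)


def generate_and_filter(prompt: str, n: int = 100, temperature: float = 1.5) -> list[str]:
    """Draw n noisy candidates and keep only those the verifier accepts."""
    candidates = [sample_candidate(prompt, temperature) for _ in range(n)]
    return [c for c in candidates if is_viable(c)]


if __name__ == "__main__":
    viable = generate_and_filter("state a true sum", n=50)
    print(f"kept {len(viable)} of 50 candidates, e.g. {viable[:3]}")
```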
When we're very creative, we increase the temperature in our own mind and recreate hypothetical universes and solutions, most of which will not work. And then we test. And we test by building a core that is internally coherent.
And we use reasoning strategies that use some axiomatic consistency by which we can identify those strategies and thoughts and sub-universes that are viable and that can expand our thinking. So if you look at the language models, they have clear limitations right now. One of them is they're not coupled to the world in real time in the way in which our nervous systems are.