David Kyle Johnson
And just as an aside, it's not a test for consciousness, right?
Something can pass the Turing test just like somebody could beat a chess master at chess without being conscious.
There's a long history of people thinking that you need consciousness to do some intellectual task and then narrow AIs blowing away that task without being conscious.
And I think passing the Turing test is just the latest in this long line of examples, right?
But then he goes on to discuss his conversations with Claude, the specific one that he's talking about, and marveling at how intelligent and thoughtful it is and saying, how is this not conscious?
To him, it's like this is so blatantly conscious with all the examples that he gives.
Right.
And seems to be especially impressed because he asks it deep philosophical questions, and it gives deep philosophical answers to those questions.
Right, exactly.
But I actually think that's the exact opposite kind of question that you should be asking it, if you're trying to test if it's conscious.
In a way, he's looking for evidence to support the conclusion rather than to challenge it, which is the opposite of what you should do.
In fact, I think that sounding like you're giving a deep answer to a philosophical question is kind of the low-hanging fruit for LLMs, because it's all just mimicking words, and it's really easy to sound profound when you're dealing with big concepts, right?
Big concepts are not the way to challenge whether or not an LLM is actually thinking.
Narrow technical ones are.
And when you do that, and Jay and I have talked about this many times because we use different versions of LLMs for different specific purposes, they're very fragile, right?
But he never asks it the kind of question that would expose its fragility, right?
Jay, when I wrote about this, I used an example that you pointed me to, actually.
Yeah.