Edward Gibson
And then if you just change that a little bit for the large language model: the large language model has just seen that explanation so many times that if you change the story a little bit, so it sounds like the Monty Hall problem but it's not, you just say: there are three doors, and behind one of them is a good prize, and there are two bad doors.
I happen to know it's behind door number one. The good prize, the car, is behind door number one. So I'm going to choose door number one. Monty Hall opens door number three and shows me there's nothing there. Should I trade for door number two, even though I know the good prize is behind door number one?
And then the large language model will say yes, you should trade, because it just goes through the forms it's seen so many times before in these cases: yes, you should trade, because your odds have shifted from one in three to two out of three. It doesn't have any way to remember that you actually have a 100% probability that the car is behind door number one.
You know that. That's not part of the scheme it's seen hundreds and hundreds of times before. And so even if you try to explain to it that it's wrong, that it can't do that, it'll just keep giving you back the same answer.
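To make the numbers concrete (this is my illustration, not something from the conversation): a short simulation of both games shows why the stock answer fails in the variant Gibson describes. Door numbering is 0-indexed, with door 0 standing in for "door number one"; the function names are just for this sketch.

```python
import random

N = 100_000

def standard_trial(switch: bool) -> bool:
    """One round of standard Monty Hall; True if the player ends up with the car."""
    doors = [0, 1, 2]
    car = random.choice(doors)
    pick = random.choice(doors)                       # player has no information
    # Monty opens a door that is neither the player's pick nor the car
    opened = random.choice([d for d in doors if d not in (pick, car)])
    if switch:
        pick = next(d for d in doors if d not in (pick, opened))
    return pick == car

def informed_trial(switch: bool) -> bool:
    """Gibson's variant: the player already knows the car is behind door 0
    and picks it, so switching can only move them off the car."""
    car, pick = 0, 0
    opened = random.choice([1, 2])                    # Monty opens one of the empty doors
    if switch:
        pick = next(d for d in (0, 1, 2) if d not in (pick, opened))
    return pick == car

for name, trial in [("standard", standard_trial), ("informed", informed_trial)]:
    stay = sum(trial(False) for _ in range(N)) / N
    swap = sum(trial(True) for _ in range(N)) / N
    print(f"{name:9s} stay ~ {stay:.3f}   switch ~ {swap:.3f}")
# standard: stay ~ 0.333, switch ~ 0.667  -> switching helps
# informed: stay = 1.000, switch = 0.000  -> switching is exactly wrong
```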
I mean, you don't have to convince me of that. I am very, very impressed. But you're describing a possible world where maybe someone's going to train some other version such that it'll somehow abstract away from these types of forms. I mean, I don't think that's happened.
That's not the inference. The inference I wouldn't want to make is that inference. The inference I'm trying to push is just: is it like humans here? It's probably not like humans here. It's different. Humans don't make that error. If you explain that to them, they're not going to make that error.
And so it's doing something different from what humans are doing in that case.
I'm just saying the error there is: if I explain to you that there's a 100% chance the car is behind this door, well, do you want to trade? People say no. But this thing will say yes, because it's so wound up on the form of that trick. That's an error that a human doesn't make, which is kind of interesting.