Charan Ranganath
It's currently under revision. But in our computer model, what we say is that maybe a good way of thinking about this is this conversation that you and I are having: it's associated with a particular context, a particular place in time. And so all of these little cues that are in the background, these little guitar sculptures that you have and that big light umbrella thing, right?
All these things are part of my memory for what we're talking about, the content. So... Now, later on, you're sitting around and you're at home drinking a beer and you think, God, what a strange interview that was, right? So now you're trying to remember it, but the context is different. So your current situation doesn't match up with the memory that you pulled up. There's error.
There's a mismatch between what you pulled up and your current context. And so in our model, what you start to do is you start to erase or alter the parts of the memory that are associated with a specific place and time, and you heighten the information about the content. And so if you remember this information in different times and different places,
It's more accessible at different times in different places because it's not overfitted in an AI kind of way of thinking about things. It's not overfitted to one particular context. But that's also why the memories that we call upon the most also feel kind of like they're just things that we read about almost. You don't vividly reimagine them.
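The mechanism described here can be sketched in code. This is a hypothetical toy illustration, not the published model: a memory is a set of weighted features split into "content" and "context". On each recall, context features that mismatch the current situation produce error and get weakened, while content features are strengthened, so repeated recall in varied contexts yields a memory that is less "overfitted" to one place and time.

```python
def recall_and_update(memory, current_context, lr=0.5):
    """Toy sketch of retrieval-based updating (illustrative only).

    memory: {"content": {feature: weight}, "context": {feature: weight}}
    current_context: set of context features present at recall time
    lr: learning rate controlling how strongly the memory is updated
    """
    # Mismatch error: stored context features absent from the current
    # situation are erased/altered (weakened toward zero).
    for feat in memory["context"]:
        if feat not in current_context:
            memory["context"][feat] *= (1 - lr)
    # Content features are heightened (pushed toward full strength).
    for feat in memory["content"]:
        memory["content"][feat] += lr * (1 - memory["content"][feat])
    return memory

# Encode the interview: topic plus incidental context cues.
memory = {
    "content": {"conversation_topic": 0.5},
    "context": {"guitar_sculpture": 0.8, "light_umbrella": 0.8},
}

# Later, at home with a beer: the context cues don't match, so the
# context features fade while the content gets stronger.
recall_and_update(memory, current_context={"home", "beer"})
```

With each recall in a fresh context the context weights shrink geometrically while the content weight approaches 1, which matches the idea that often-recalled memories come to feel like facts rather than vivid re-experiences.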
It's like they're just these things that just come to us like facts. And it's a little bit different than semantic memory, but it's like basically these events that we have recalled over and over and over again, we keep updating that memory so it's less and less tied to the original experience.
But then we have those other ones, which it's like you just get a reminder of that very specific context. You smell something, you hear a song, you see a place that you haven't been to in a while, and boom, it just comes back to you. And that's the exact opposite of what you get with spacing, right?
Yeah, but at the same time, it becomes stronger in the sense that the content becomes stronger.
Yeah. And I think this falls into a category. We've done other modeling. One of these is a published study in PLOS Computational Biology, where we showed another mechanism, which is, I think, related to the spacing effect: what's called the testing effect. So the idea is that if you're trying to learn words...