Regina Barber
Right, right. Okay, so the three places errors could come from are: one, the model itself; two, the way it's trained, right? And three, the data, or the lack of data, that it's trained on.
Let's talk about how those errors build. What happens when they start to build upon each other? Can you describe that outcome to me?
Okay, so over time you start to lose the more unique occurrences and all the data starts to look more similar to the average.
So instead of this bell curve, you just have, like, a point in the middle. Just a whole bunch of stuff clustered in the middle.
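A minimal sketch of that collapse-to-the-middle dynamic, assuming nothing about the actual models discussed here: each "generation" is just a Gaussian fit to samples drawn from the previous generation's output, and the sample size and generation count are arbitrary choices.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy illustration, not the study's setup: each "generation" of the
# "model" is a Gaussian fit to samples produced by the previous one.
n = 50
samples = rng.normal(loc=0.0, scale=1.0, size=n)  # the original bell curve

for generation in range(1, 201):
    mu, sigma = samples.mean(), samples.std()  # fit the new "model"
    samples = rng.normal(mu, sigma, size=n)    # it only sees model output
    if generation % 50 == 0:
        print(f"gen {generation:3d}: std = {sigma:.3f}, "
              f"most extreme sample = {np.abs(samples).max():.2f}")
# The spread collapses over the generations: rare tail values stop being
# drawn, so no later fit can recover them, and everything piles up
# around the mean.
```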
I know this isn't exactly the same, but it makes me think of the telephone game. You know, when you tell somebody a phrase or a couple of sentences, and then that person tells the next person, and so on.
I mean, I'm looking at some of the image output of these models that are trained on their own data right now, and we'll link these images in the show notes. I'm looking at somebody's handwriting of the digits zero through nine. And, you know, it's not perfect, it's handwriting. But as it gets regenerated by the models over and over, like 15 times, they're just dots, right?
Like, they're not distinguishable. You can't even tell they're numbers, let alone which one is which.
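A deliberately oversimplified stand-in for the digit experiment described above, not the study's actual setup: each "generation" fits an independent per-pixel Gaussian to the previous generation's images and samples a fresh batch from that fit, using scikit-learn's built-in 8x8 handwritten digits for convenience.

```python
import numpy as np
from sklearn.datasets import load_digits

rng = np.random.default_rng(0)

# Toy stand-in, NOT the study's models: each "generation" fits a
# per-pixel Gaussian to the previous generation's images, then samples
# a new batch of "images" from that fit.
images = load_digits().data / 16.0  # 1797 flattened 8x8 handwritten digits

for generation in range(1, 16):
    mu = images.mean(axis=0)    # per-pixel average
    sigma = images.std(axis=0)  # per-pixel spread
    images = rng.normal(mu, sigma, size=images.shape).clip(0.0, 1.0)
    print(f"generation {generation:2d}: mean per-pixel std = {sigma.mean():.4f}")
# The distinct digit shapes vanish almost immediately -- a single
# Gaussian can only reproduce the average image plus noise -- and the
# per-pixel spread keeps shrinking, so samples converge on one blur.
```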