John List
Podcast Appearances
The main problem is we just don't understand the science of scaling.
I do think it's a crisis in that if we don't take care of it as scientists, I think everything we do can be undermined in the eyes of the policymaker and the broader public. We don't understand how to use our own science to make better policies.
So Dana and I met back in 2012. And we were introduced by a mutual friend. And we did the usual ignore each other for a few years because we're too busy. And push came to shove. Dana and I started to work on early childhood research. And after that, research turned to love.
You can kind of put what we've learned into three general buckets that seem to encompass the failures. Bucket number one is that the evidence was just not there to justify scaling the program in the first place. The Department of Education did this broad survey on prevention programs attempting to attenuate youth substance abuse and crime and things like that.
And what they found is that only 8% of those programs were actually backed by research evidence. Many programs that we put in place really don't have the research findings to support them. And this is what a scientist would call a false positive. So are we talking about bad research? Are we talking about cherry picking? Are we talking about publication bias?
So here we're talking about none of those. We're talking about a small-scale research finding where the truth was in that finding, but because of the mechanics of statistical inference, it just won't be right at scale. What you were getting into is what I would call the second bucket of why things fail, and that's what I call the wrong people were studied.
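A minimal sketch of that "mechanics of statistical inference" point (my own illustration, not from the episode; the sample size, effect size, and threshold are assumptions): it simulates thousands of small pilot studies of a program with zero true effect and counts how often a standard t-test declares the program "significant" anyway.

```python
import random
import statistics

# Simulate many small pilot studies of a program with NO real effect and
# count how often ordinary inference flags it as significant anyway.

random.seed(0)

def t_stat(treat, control):
    """Two-sample t statistic (pooled-variance form)."""
    n1, n2 = len(treat), len(control)
    m1, m2 = statistics.mean(treat), statistics.mean(control)
    v1, v2 = statistics.variance(treat), statistics.variance(control)
    pooled = ((n1 - 1) * v1 + (n2 - 1) * v2) / (n1 + n2 - 2)
    return (m1 - m2) / (pooled * (1 / n1 + 1 / n2)) ** 0.5

trials, n = 10_000, 30          # 10,000 pilots, 30 participants per arm
false_positives = 0
for _ in range(trials):
    treat   = [random.gauss(0, 1) for _ in range(n)]   # zero true effect
    control = [random.gauss(0, 1) for _ in range(n)]
    if abs(t_stat(treat, control)) > 2.0:  # ~5% two-sided cutoff at df=58
        false_positives += 1

print(f"{false_positives / trials:.1%} of null pilots look 'significant'")
# Expect roughly 5%: even flawless small studies will sometimes hand
# policymakers a mirage worth scaling.
```

With enough small pilots being run, a five-percent false-positive rate guarantees that some "proven" programs are statistical noise, which is exactly the first bucket's failure mode even when no one did bad research.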
You know, these are studies that have a particular sample of people that shows really large program effect sizes. But when your program goes to general populations, that effect disappears. So essentially, we were looking at the wrong people and scaling to the wrong people.