John List
Podcast Appearances
…that found huge improvements in both child and parent outcomes in the original study. But when they tried to scale that up into home visits at a much larger scale, what they found is that home visits for at-risk families involved a lot more distractions in the house and less time on child-focused activities. So this is sort of a case where the wrong dosage or the wrong program is given at scale.
When you think about the chef: if a restaurant succeeds because of the magical work of the chef, and you think about scaling that, if you can't scale the magic in the chef, that's not scalable. Now, if the magic is because of the mix of ingredients and the secret sauce, like Domino's or Papa John's, for example, where the secret sauce is the actual ingredients, then that will be scalable.
Now, our proposal is that we should not scale a program until we're 95% certain the result is true. So essentially what that means is we need the original research and then three or four well-powered, independent replications of the original findings.
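To make the arithmetic behind that threshold concrete, here is a minimal Bayesian sketch of how successive significant replications raise the probability that a finding is real. The prior, power, and significance values below are illustrative assumptions for this sketch, not numbers from the conversation:

```python
# A minimal sketch of the arithmetic behind the "95% certain" threshold.
# Assumed inputs (illustrative, not List's exact figures): a surprising
# result has a low prior probability of being true, each study has 80%
# power, and each test uses a 5% significance level. Every statistically
# significant study updates the posterior via Bayes' rule.

def posterior_true(prior: float, power: float, alpha: float, n_significant: int) -> float:
    """P(effect is real | n_significant studies all found it significant)."""
    true_path = prior * power ** n_significant          # real effect, detected every time
    false_path = (1 - prior) * alpha ** n_significant   # no effect, all false positives
    return true_path / (true_path + false_path)

prior, power, alpha = 0.05, 0.80, 0.05  # assumed values for illustration
for n in range(1, 6):  # n = original study plus (n - 1) replications
    p = posterior_true(prior, power, alpha, n)
    print(f"original + {n - 1} replication(s): P(true) = {p:.3f}")
```

Under these assumptions the original study alone leaves the posterior around 46%, and it takes roughly two clean replications to cross 95%; a more skeptical prior or lower real-world power pushes the requirement toward the three or four replications proposed above.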
My intuition is that the hard sciences are probably not far away from three or four well-powered, independent replications. In many cases, you not only have the original research, but you have a first replication also published in Science. You know, the current credibility crisis in science is a serious one: major results are not replicating.
The reason is that we weren't serious about replication in the first place. So this sort of puts the onus on policymakers and funding agencies, in the sense of saying: we need to change the equilibrium.
Well, I think it's sort of a mix. I think it's fair to say that some policymakers are out looking for evidence to base their preferred program on, and what this will do is slow that down. If you have a pet project that you want to get through, fund the replications and let's make sure the science is correct. We think we should actually be rewarding scholars for attempting to replicate.
You know, right now in my community, if I try to replicate someone else's work, guess what I've just made? I've just made a mortal enemy for life. And if you find a publishable result, what result is that? You're refuting previous research. Now I've doubled down on my enemy. So that's like a first step: rewarding scholars who are attempting to replicate.