Mike Hudack
Podcast Appearances
Like you have to have deep, deep respect for the people that you compete against and they are smart and good and they are trying to do the same thing that you are. And I think that that's like the first law of competing with anyone.
Yeah. You know, Deliveroo has one product, really, which is food. And then everything else is kind of a feature or a way of delivering that. So it's a weird way of thinking about it.
What is that? For sure. For sure. Well, the key is that you ship all of those things in carefully designed and controlled experiments, where you have a set of metrics that you're looking to improve. So you can imagine, for example, that driver chat is designed to reduce what's called rider experience time, the time that it takes a rider to get from the restaurant to handing you the order.
And there's often, you know, a couple of minutes at the end of RET where they're looking for your apartment, you know, fumbling around or whatever. And you're sitting there being like, God, if the guy just hits 2B, you know, I'll have my food now. Do I really have to go outside? Whatever.
And so you can design an experiment which very clearly shows whether or not RET, rider experience time, decreases by the amount that you expect it to in the population that has messaging. What you do is you give messaging to, I don't know, somewhere between 20 and 50% of the users. And then you just look at the delta in RET between those two.
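A minimal sketch of the split-and-compare setup described above, with hypothetical names and simulated numbers rather than anything from Deliveroo's actual tooling: each user is hashed into a treatment or control arm, and the delta in mean RET between the two arms is what the experiment reads out.

```python
import hashlib
import random
import statistics

def assign_bucket(user_id: int, treatment_share: float = 0.5) -> str:
    """Deterministically bucket a user into the treatment or control arm."""
    digest = int(hashlib.md5(str(user_id).encode()).hexdigest(), 16)
    return "treatment" if (digest % 100) < treatment_share * 100 else "control"

rng = random.Random(42)
observations = {"control": [], "treatment": []}

# Simulate RET (rider experience time) in minutes for 10,000 orders,
# pretending driver chat shaves about half a minute off the treatment arm.
for user_id in range(10_000):
    bucket = assign_bucket(user_id)
    ret = rng.gauss(9.0, 2.0) + (-0.5 if bucket == "treatment" else 0.0)
    observations[bucket].append(ret)

delta = (statistics.mean(observations["treatment"])
         - statistics.mean(observations["control"]))
print(f"RET delta (treatment - control): {delta:.2f} minutes")
```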
And you can do power analysis so that you know how big the experiment needs to be, how long it has to run. And then at the end of that, as long as there were no execution problems with the feature, and as long as you designed the experiment correctly, you're going to get an answer.
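A sketch of that power-analysis step under assumed parameters, using statsmodels' TTestIndPower; a real pipeline would plug in its own baseline RET variance, minimum detectable effect, and order volume.

```python
from statsmodels.stats.power import TTestIndPower

minimum_detectable_effect = 0.5   # minutes of RET we care about detecting
baseline_std = 2.0                # assumed spread of RET in minutes
effect_size = minimum_detectable_effect / baseline_std  # Cohen's d

# Solve for the sample size per arm needed at the usual thresholds.
n_per_arm = TTestIndPower().solve_power(
    effect_size=effect_size,
    alpha=0.05,   # acceptable false-positive rate
    power=0.8,    # chance of detecting a real effect of that size
)
print(f"Need roughly {n_per_arm:.0f} orders per arm")
# Dividing by daily order volume then tells you how long the test must run.
```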
The answer might be that there's no statistically significant impact, in which case you should unship the thing, or maybe there's something wrong with it. It may increase RET, which would be a paradoxical outcome, and you then need to figure out why, or it may decrease RET, in which case you roll out the feature to 100%.
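A self-contained sketch of that read-out logic on made-up RET samples: run a two-sample t-test on the two arms and map the outcome to unship, investigate, or roll out.

```python
from scipy import stats

# Toy RET samples in minutes for the two arms (made up for illustration).
control = [9.1, 8.7, 10.2, 9.5, 8.9, 9.8, 9.3, 10.0, 9.6, 9.2]
treatment = [8.6, 8.2, 9.7, 9.0, 8.4, 9.1, 8.8, 9.4, 9.0, 8.7]

t_stat, p_value = stats.ttest_ind(treatment, control)
delta = sum(treatment) / len(treatment) - sum(control) / len(control)

if p_value >= 0.05:
    decision = "no significant impact: unship, or look for something wrong with the feature"
elif delta > 0:
    decision = "RET went up: paradoxical, figure out why before doing anything else"
else:
    decision = "RET went down: roll the feature out to 100%"

print(f"delta = {delta:.2f} min, p = {p_value:.3f} -> {decision}")
```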