
Gerhard Lazu


So when everything worked perfectly, when the operations were cached, you could get a new deploy within four minutes, between four and five minutes thereabouts. And with this change, what I was aiming for was two minutes or less.

And when I ran the initial tests and so on, we could see that while the first deploy would be slightly slower, because there was nothing cached yet, subsequent deploys would take about two minutes. Two minutes and 15 seconds is the one which I have right here, in a screenshot on that pull request 522. So how did we accomplish this?

We're using namespace.so, which provides faster GitHub Actions runners, basically faster builds. And we run the engine there. And when a run starts, we basically restore everything from the Namespace cache, which is much, much faster. And we can see up there, per run, how much CPU is being used and how much memory.
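As a rough illustration, a GitHub Actions job that runs on Namespace runners and restores a cache between runs might look something like this. This is a hypothetical sketch, not the actual workflow from the repo: the runner label, the cache action name, and the cached paths are assumptions you would need to check against Namespace's documentation.

```yaml
jobs:
  build-test-publish:
    # Namespace-hosted runner; the exact label comes from your Namespace profile.
    runs-on: namespace-profile-default
    steps:
      - uses: actions/checkout@v4
      # Hypothetical: restore build artifacts from Namespace's cache
      # so subsequent runs skip recompiling unchanged Elixir modules.
      - uses: namespacelabs/nscloud-cache-action@v1
        with:
          path: |
            deps
            _build
      - run: mix deps.get && mix compile
```

The idea is that the cache lives close to the runner, so restoring it is much faster than pulling a cache archive over the network on every run.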

Again, these are all screenshots on that pull request. And while the first run obviously uses quite a bit of CPU, because you have to compile all the Elixir into bytecode and all of that, subsequent runs are much, much quicker. And the other thing which I did: I split the... let's see, is it here? It's not actually here. We need to go to Honeycomb to see that.

So I'm going to Honeycomb to look at that. I've split the build time, basically the build, test, and publish, from the deploy time, because something really interesting is happening there. So let's take, for example, before this change, Dagger on Fly, one of the blue ones, and have a look at the trace. So we have this previous run which actually took 4 minutes and 21 seconds.

And all together it took basically three minutes; there's some time to start the engine, to start the machine, and so on. All in all, four minutes and 20 seconds. A newer run, for example this one, was fairly fast: two and a half minutes. If we look at the trace, we can see that for Dagger on Namespace the build, test, and publish was 54 seconds.

So in 54 seconds, we went from just getting the code to getting the final artifact, which is a container image that we ship into production. In this case, we basically publish it to GHCR.io. And then the deploy starts, and the deploy took one minute and 16 seconds. So we can see that with this split it's very clear where the time is spent.
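To make the split concrete, here's a small sketch in plain Python, using the two phase durations from the run described above, showing how separating the pipeline into two spans attributes the time:

```python
# Phase durations from the trace discussed above (in seconds).
phases = {
    "build-test-publish": 54,   # code checkout -> image published to GHCR.io
    "deploy": 76,               # blue-green rollout (1 minute 16 seconds)
}

total = sum(phases.values())
print(f"total: {total}s ({total // 60}m{total % 60:02d}s)")
for name, secs in phases.items():
    # Share of the pipeline each phase accounts for.
    print(f"{name}: {secs}s ({secs / total:.0%} of the pipeline)")
```

The two spans sum to 2m10s; the remaining gap up to the ~2.5-minute wall-clock total is exactly the kind of overhead (runner start-up, time between spans) that only becomes visible once the trace is split this way.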

And while the build time and the publish time are fairly fast, less than a minute in this case, the deploy takes a while, because we do blue-green: new machines are being promoted, the application has to start, it has to do the health checks. So there are quite a few things which happen behind the scenes that, if you look at it all as one unit, are difficult to understand.
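The blue-green steps described above (start new machines, wait for health checks, promote, retire the old ones) can be sketched roughly like this. This is a minimal, hypothetical simulation, not Fly.io's actual implementation; the machine and health-check model here is invented for illustration:

```python
import time

def healthy(machine: dict) -> bool:
    # Stand-in for polling an HTTP health-check endpoint.
    return machine["app_started"]

def blue_green_deploy(blue: list[dict], new_image: str) -> list[dict]:
    # 1. Start new ("green") machines alongside the old ("blue") ones.
    green = [{"image": new_image, "app_started": False} for _ in blue]

    # 2. Wait for each green machine's application to boot and pass health checks.
    for m in green:
        m["app_started"] = True      # in reality: the app boots asynchronously
        while not healthy(m):
            time.sleep(1)

    # 3. Promote green (route traffic to it); blue machines are retired.
    return green

fleet = blue_green_deploy([{"image": "app:v1", "app_started": True}] * 2, "app:v2")
print([m["image"] for m in fleet])  # prints ['app:v2', 'app:v2']
```

Each of those steps takes real time in production, which is why the deploy span dominates the trace even when the build is fast.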

So this was the ideal case. This is what I thought would happen. Of course, the last deploys... I'm just going to filter these to Dagger on Namespace. By the way, we are in Honeycomb. We send all the traces and all the build traces from GitHub Actions to Honeycomb, and you can see how we do that integration in our repo. You can see that we had this one, 2.77 minutes, which is roughly two minutes and 46 seconds.

But the next one was surprising: it took nearly five minutes. And if I look at this trace, again, nothing changed; stuff had to be recompiled. But in this case, the build, test, and publish took nearly three minutes, which tells me there is some variability across the various runs when it builds. I don't know why this happens, but I would like to follow up on that.
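One way to follow up on that variability would be to pull the build-phase durations out of the trace data and quantify the spread. A sketch in plain Python, with made-up durations standing in for what a real Honeycomb query over the GitHub Actions traces would return:

```python
import statistics

# Hypothetical build-test-publish durations (seconds) from a series of runs;
# the real numbers would come from querying the Honeycomb dataset.
build_secs = [54, 61, 58, 172, 57, 63]

mean = statistics.mean(build_secs)
stdev = statistics.stdev(build_secs)
print(f"mean: {mean:.0f}s, stdev: {stdev:.0f}s")

# Flag runs more than two standard deviations above the mean as outliers
# worth digging into (cache miss? slow runner? recompilation storm?).
outliers = [s for s in build_secs if s > mean + 2 * stdev]
print("outliers worth investigating:", outliers)
```

Comparing the traces of flagged runs against typical ones would show whether the slow builds are losing time in cache restore, compilation, or somewhere else.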