
Gerhard Lazu

Speaker
1554 total appearances

Podcast Appearances

Anyways, we will clarify this after I mention what I have to say. Wouldn't it be nice if we had a repository for the pipe dream self-contained separate from the application? Whose idea was it?

That's right. So github.com/thechangelog/pipedream is a thing. It even has a first PR, which was adding dynamic backends. And we put it close to the origin. A couple of things, so you can go and check it out, PR #1. And what do you think about it? Is the repo what you thought it would be?

Well, I think the person whose idea it was should do that. However, I can start. So the idea of the pipe dream was to try and build our own CDN, how we would do it. Single purpose, single tenant, running on fly.io. It's running Varnish Cache, the open source variant. And we just needed the simplest CDN, which is, I think, less than 10% of what our current CDN provides.

And the rest is, most of the time, just in the way. It complicates things and makes the simple tasks a bit more difficult. As for how the idea started, I would only quote you again, Jerod. Would you like me to quote you again? That was Kaizen 15.

I like the idea of having this 20 line varnish config that we deploy around the world. And it's like, look at our CDN guys. It's so simple, and it can do exactly what we want it to do and nothing more. But understand that that's a pipe dream. That's where the name came from.

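As a sense of scale, a single-purpose Varnish config really can fit in roughly that space. Here is a minimal sketch; the origin hostname, port, and fallback TTL are illustrative assumptions, not the actual pipedream config:

```vcl
vcl 4.1;

# Hypothetical origin; plain HTTP, because open-source Varnish does not
# terminate TLS to backends by itself.
backend origin {
    .host = "changelog.com";
    .port = "80";
}

sub vcl_recv {
    # Only cache idempotent requests; everything else goes to the origin.
    if (req.method != "GET" && req.method != "HEAD") {
        return (pass);
    }
    # Strip cookies so responses stay cacheable.
    unset req.http.Cookie;
}

sub vcl_backend_response {
    # Fall back to a short TTL when the origin sends no caching headers.
    if (beresp.ttl <= 0s) {
        set beresp.ttl = 60s;
    }
}
```
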
Because the Varnish config will be slightly longer than 20 lines, and we'd run into all sorts of issues that we end up sinking all kinds of time into. Jerod Santo, March 29th, 2024. Changelog & Friends, episode 38.

Yeah, I mean, the initial commit of the repo was basically extracted from what would have become a pull request to the changelog repo. That was the initial commit, and we ended up with 46 lines of Varnish config. Pull request one, which added dynamic backends, also does something interesting with a cache status header. We ended up with 60 lines of Varnish config. Why dynamic backends?

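The cache status header is a classic Varnish pattern; a sketch of how it might look, where the header name is an assumption:

```vcl
sub vcl_deliver {
    # obj.hits counts how often this object was served from cache;
    # 0 means Varnish just fetched it from the backend.
    if (obj.hits > 0) {
        set resp.http.x-cache-status = "HIT";
    } else {
        set resp.http.x-cache-status = "MISS";
    }
}
```
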
That was an important one because whenever there's a new application deployment, you can't have static backends. The IP will change. Therefore, you need to use the DNS to resolve whatever the domain is pointing to. So that's what the first pull request was. And that's what we did in the second iteration. Now, I captured what I think is a roadmap. It's in the repo.

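One common way to get DNS-resolved backends in open-source Varnish is vmod_dynamic; the actual PR may do it differently, and the hostname and re-resolution TTL below are assumptions:

```vcl
vcl 4.1;

import dynamic;

# No static backend: the director resolves DNS at request time.
backend default none;

sub vcl_init {
    # Re-resolve the origin's DNS record periodically, so a new
    # deployment with a new IP is picked up without reloading the VCL.
    new origin = dynamic.director(port = "80", ttl = 30s);
}

sub vcl_recv {
    set req.backend_hint = origin.backend("changelog.com");
}
```
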
And I was going to ask you, what do you think about the idea in terms of what's coming? So the next step would be to add the feeds backend. Why? Because we are publishing the feeds to Cloudflare R2, so we would need to proxy to that and basically cache those. I think that would be a good next step.

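A feeds backend could be sketched as a second backend plus a routing rule in vcl_recv; the bucket hostname and URL prefix here are placeholders, since the source does not give them:

```vcl
# Hypothetical R2 bucket host; R2 routes requests on the Host header.
backend feeds {
    .host = "feeds.example-bucket.r2.cloudflarestorage.com";
    .port = "80";
}

sub vcl_recv {
    if (req.url ~ "^/feeds/") {
        set req.backend_hint = feeds;
        # Rewrite Host so the object store serves the right bucket.
        set req.http.Host = "feeds.example-bucket.r2.cloudflarestorage.com";
    }
}
```
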
Then I'm thinking we should figure out how to send the logs to Honeycomb exactly the same as we currently send them. So that, you know, same structure, same dashboards, same queries, same SLOs; everything that we have configured in Honeycomb would work exactly the same with the new logs from this new CDN. Then we need to implement the purging across all instances.

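Per-instance purge handling in VCL is the easy half; fanning a PURGE out to every running instance (for example via fly.io's internal DNS) is the open part. A sketch of what each node might accept, with an assumed private-network ACL:

```vcl
# Only trusted callers may purge.
acl purgers {
    "localhost";
    "10.0.0.0"/8;  # assumed internal network range
}

sub vcl_recv {
    if (req.method == "PURGE") {
        if (client.ip !~ purgers) {
            return (synth(403, "Forbidden"));
        }
        # Invalidate the cached object for this URL on this instance.
        return (purge);
    }
}
```
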