Gerhard Lazu
I think that's slightly harder because as we deploy the CDN in like 16 regions, 16 locations, we would need to expire, right? Like when there's an update. So that I think is slightly harder, but not crazy difficult. And then we would need to import all the current edge redirects from our current CDN into Pipedream. And I think with that, we could try running it in production, I think.
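The expiry step described above means fanning a purge out to every edge location when content updates. A minimal sketch of that fan-out, in Python rather than the Elixir the app actually runs, with hypothetical hostnames and a hypothetical `X-Purge-Key` header:

```python
import concurrent.futures
import urllib.request

# Hypothetical edge hostnames; the real deployment has around 16 locations.
EDGE_HOSTS = [f"edge-{i}.example.com" for i in range(16)]

def purge(host: str, path: str, key: str) -> int:
    """Send a PURGE request for `path` to one edge node, presenting a key."""
    req = urllib.request.Request(
        f"https://{host}{path}",
        method="PURGE",
        headers={"X-Purge-Key": key},  # hypothetical auth header
    )
    with urllib.request.urlopen(req) as resp:
        return resp.status

def purge_everywhere(path: str, key: str) -> list[int]:
    # An update is not fully expired until every location has dropped
    # its cached copy, so purge all edges in parallel.
    with concurrent.futures.ThreadPoolExecutor() as pool:
        return list(pool.map(lambda h: purge(h, path, key), EDGE_HOSTS))
```

The point is only the shape: one update event becomes N purge requests, one per location.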
So we can still keep S3, whatever intercepts the logs, right? Because in our current CDN, obviously the CDN intercepts all the logs. And then some of those logs, they get sent to S3 indeed. But then all the logs, they get sent to Honeycomb. So you're right, I forgot about the S3 part.
So on top of sending everything to Honeycomb, we would also need to send a subset to S3 exactly as the current config. So yes, that's an extra item that's missing on that roadmap indeed.
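That routing rule, everything to Honeycomb and only a subset to S3, can be sketched as a small decision function. This is a hedged illustration: the actual subset criterion in the current config isn't stated here, so the `.mp3` rule below is a made-up placeholder:

```python
def route_log(entry: dict) -> list[str]:
    """Decide which sinks a log entry goes to.

    Mirrors the scheme described: every entry goes to Honeycomb,
    and a configured subset additionally goes to S3.
    """
    sinks = ["honeycomb"]  # all logs are sent to Honeycomb
    path = entry.get("path", "")
    if path.endswith(".mp3"):  # placeholder for the real subset rule
        sinks.append("s3")
    return sinks
```

Whatever the real predicate is, the key property is that S3 receives a strict subset of what Honeycomb receives.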
No idea currently. I mean, based on our architecture and what we have running, so that we avoid introducing something new as a new component, a new service that does this, we could potentially do it as a job using Oban, I think. Because at the end of the day, it's just hitting some endpoints, HTTP endpoints, and it just needs to present a key, right?
If we don't use it, anyone can expire our cache, which is a default in some CDNs. Yeah, it is. Yeah, we found that out the hard way. Exactly. So that's something that we need. I think an Oban job would make most sense. It's actually pretty straightforward.
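The edge-side half of that key check is also small. A minimal sketch, assuming a shared secret and a hypothetical `X-Purge-Key` header (the real header and key management aren't specified in the conversation):

```python
import hmac

PURGE_KEY = "change-me"  # hypothetical shared secret per deployment

def allow_purge(request_headers: dict) -> bool:
    # Reject PURGE requests that don't present the shared key; without
    # this check, anyone on the internet could expire the cache.
    presented = request_headers.get("X-Purge-Key", "")
    # Constant-time comparison avoids leaking the key via timing.
    return hmac.compare_digest(presented, PURGE_KEY)
```

The job that issues purges presents the key; the edge validates it before acting.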
Yeah. We can get that information by doing a DNS query and it tells us all instances and then we can get all the URLs.
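The DNS-based discovery mentioned above can be sketched with the standard library: one query returns the addresses of all running instances, from which per-instance URLs are built. The hostname and path here are placeholders:

```python
import socket

def instance_urls(app_host: str, path: str) -> list[str]:
    # A single DNS query returns the A records for every running
    # instance; from those we build the per-instance URLs to hit.
    _, _, addresses = socket.gethostbyname_ex(app_host)
    return [f"http://{addr}{path}" for addr in sorted(addresses)]
```

This is the same pattern many orchestrators expose (one name resolving to all instance IPs), so no extra service-discovery component is needed.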
So Pipedream is our own CDN, which caches requests going to backends. So imagine that there's a request that needs to hit the app and then the app needs to respond. So the first time, like let's say the home page, once the app does that, subsequent requests, they no longer need to go to the app. Pipedream can just serve them, because it already has that request cached.
And then because Pipedream is distributed across the whole world, it can serve from the closest location to the user. To the person. Exactly. And the same would be true, for example, for feeds, even though they are stored in Cloudflare R2. The Pipedream instance now goes to Cloudflare R2, gets the feed, and then serves the feed.
So by default, we're using memory, but using a static backend, like a disk backend, would be possible, yes.
I don't know. I quite like the name, to be honest. I think it has a great story behind it, you know? So it just goes back to the origin.