Cian Butler
Podcast Appearances
But I think the thing about scaling microservices is that it seems like a really easy thing to do.
You can just throw a little service at it and everything works.
I just have this service; I call it and it gives me a response.
And when you're running one box talking to another box, that does scale pretty nicely.
And when you have a small bit of traffic, it scales really nicely, precisely because you have a small bit of traffic.
But in the real world, it's never that simple.
You deploy 10 replicas of one service,
and you have 20 replicas of your other service, let's say.
You need to ensure that you're properly load balancing across those 10 replicas.
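The load-balancing point above can be sketched with a minimal client-side round-robin picker (the replica addresses and the service name are made up for illustration; real setups usually delegate this to a service mesh or load balancer):

```python
from itertools import cycle

# Hypothetical replica addresses for the 10 replicas mentioned above.
replicas = [f"service-a-{i}:8080" for i in range(10)]

# cycle() walks the list forever, wrapping back to the start,
# so each call picks the next replica in round-robin order.
rr = cycle(replicas)

def next_replica():
    """Return the next replica to send a request to."""
    return next(rr)

picks = [next_replica() for _ in range(12)]
```

After 10 picks the rotation wraps, so the 11th request lands back on the first replica, spreading load evenly across all 10.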
You need to account for the network delay in your one service as it waits on the other.
You start running into issues about managing connection pools and blocking IO resources.
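The connection-pool issue above can be made concrete with a minimal bounded pool sketch (the sizes and timeout are illustrative, and plain objects stand in for real connections): instead of opening unbounded connections to a downstream service, callers wait with a timeout, so exhaustion surfaces as an error rather than silent blocking.

```python
import queue

class ConnectionPool:
    """Bounded pool: a fixed number of connections is created up front,
    and callers block with a timeout instead of opening more."""

    def __init__(self, size, factory):
        self._pool = queue.Queue(maxsize=size)
        for _ in range(size):
            self._pool.put(factory())

    def acquire(self, timeout=1.0):
        # Blocks until a connection frees up; raises queue.Empty on
        # timeout, turning silent waiting into visible back-pressure.
        return self._pool.get(timeout=timeout)

    def release(self, conn):
        self._pool.put(conn)

# Plain objects stand in for sockets here.
pool = ConnectionPool(size=2, factory=object)
first = pool.acquire()
second = pool.acquire()
try:
    pool.acquire(timeout=0.1)  # pool exhausted: times out
    exhausted = False
except queue.Empty:
    exhausted = True
```

The design choice is the timeout: a caller that can't get a connection fails fast and can shed load, rather than sitting blocked on a downstream service.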
This is one of those things that we actually ran into a lot in our monolith.
The way Python blocks
can be quite problematic, because it doesn't just go to sleep and poll. It can just sit there and wait, and then you have resources that are blocked waiting on it. You need to know how to sleep, and how to pick up more work in the background while you wait on resources to free up. But if you never have to call across that network boundary...
If you have all your logic in a monolith, you can avoid the overhead of a network.
You have a much simpler cognitive design that you can account for.
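The blocking behavior described above is what cooperative scheduling avoids: an awaiting coroutine yields to the event loop instead of pinning a worker. A minimal asyncio sketch (the service names and delays are invented, with `asyncio.sleep` standing in for a network call):

```python
import asyncio
import time

async def call_service(name, delay):
    # asyncio.sleep stands in for a network call; unlike a blocking
    # socket read, awaiting yields so the loop can run other work.
    await asyncio.sleep(delay)
    return f"{name}: ok"

async def main():
    # Both waits overlap, so total time is roughly max(delay),
    # not the sum -- the loop picks up work while each call waits.
    return await asyncio.gather(
        call_service("service-a", 0.05),
        call_service("service-b", 0.05),
    )

start = time.monotonic()
results = asyncio.run(main())
elapsed = time.monotonic() - start
```

With blocking calls the two waits would add up; here they overlap, which is exactly the "pick up more work while you wait" behavior the blocking monolith lacked.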
And yeah, this is a problem we're running into.
I keep saying "this is a problem we're running into."