Cian Butler
But it's also not a perfect cache because some of our caches are in memory and some of them are memcache.
So things that were in memcache, those were quick.
But if it was in an in-memory cache, unless you hit the exact same node again, that in-memory cache is useless.
And like I said, we're running lots of replicas.
So there's no real guarantee on that performance.
Yeah, I'm a big believer that in-memory caches are only good when you can run a small footprint, because the cache builds up inside that footprint.
And if you need lots of replicas for whatever reason, be that budgetary or a limit like only having one CPU mapped to a process or something like that, you end up with these very disparate caches of different information, and your load kind of ends up going all over the place.
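The effect being described can be sketched with a small simulation. This is not their system; the routing model, key names, and set-based caches are all assumptions, but it shows why per-replica in-memory caches miss under random load balancing while a shared cache does not.

```python
import random

# Hedged sketch: with one in-memory cache per replica and random
# routing, a repeated request only hits cache if it happens to land
# on a replica that has already seen that key. A shared cache
# (memcache-style) hits on every repeat regardless of routing.

def simulate(num_replicas, requests, seed=0):
    rng = random.Random(seed)
    per_replica = [set() for _ in range(num_replicas)]  # one cache per node
    shared = set()                                      # one shared cache
    local_hits = shared_hits = 0
    for key in requests:
        node = rng.randrange(num_replicas)  # random load balancing
        if key in per_replica[node]:
            local_hits += 1
        per_replica[node].add(key)
        if key in shared:
            shared_hits += 1
        shared.add(key)
    return local_hits, shared_hits

# 50 distinct keys, each requested twice, spread across 10 replicas.
local, shared = simulate(num_replicas=10,
                         requests=[f"row:{i}" for i in range(50)] * 2)
print(local, shared)  # shared hits all 50 repeats; local hits far fewer
```

With ten replicas, the second request for a key lands on the same node only about one time in ten, which is the "no real guarantee on that performance" point above.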
Yes, you would think that.
But the issue isn't that we have one caching mechanism; it's that we have different caching mechanisms.
So we were using the Python caching library for in-memory cache.
And then we were using our memcache with our database to cache responses from the database.
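The specific "Python caching library" isn't named here; as a stand-in, the standard library's `functools.lru_cache` shows the per-process behavior being described, where a pure function's results live only inside the memory of the one replica that computed them.

```python
import functools

# Assumption: the in-memory cache behaves like functools.lru_cache,
# which memoizes results per process, not across replicas.

@functools.lru_cache(maxsize=1024)
def normalize(name):
    """A pure function: same input, same output, so the result is
    safe to cache in this process's memory."""
    return name.strip().lower()

normalize("  Alice ")               # computed on the first call
normalize("  Alice ")               # served from this process's cache
print(normalize.cache_info().hits)  # → 1
```

Any other replica calling `normalize("  Alice ")` would start with a cold cache and recompute, which is the contrast with the shared memcache layer.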
So these are actually two different caches.
The memcache one is just: could we stop ourselves from going to the database?
And we would totally check that on every request.
So if we had done a very expensive DB query, it should be in that memcache.
So on the retry, it would come from the memcache.
What wasn't in the memcache were those pure functions we were running inside the monolith; those results lived only in the Python in-memory cache.
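A minimal sketch of that request flow, with every name here assumed rather than taken from their codebase: a shared memcache-style store serves the retry even when it lands on a different replica, while each replica's own cache of the pure function starts cold.

```python
import functools

# Hedged sketch: MEMCACHE stands in for the shared cluster; each call
# to make_replica() simulates a separate process with its own
# per-process lru_cache.

MEMCACHE = {}
db_calls = 0    # how many times we actually hit the "database"
pure_calls = 0  # how many times the pure function actually ran

def make_replica():
    @functools.lru_cache(maxsize=None)  # per-replica cache, not shared
    def pure(n):
        global pure_calls
        pure_calls += 1
        return n * 2

    def handle(key):
        global db_calls
        if key not in MEMCACHE:      # shared cache checked on every request
            db_calls += 1            # expensive DB query on a miss
            MEMCACHE[key] = key.upper()
        return MEMCACHE[key], pure(len(key))

    return handle

replica_a, replica_b = make_replica(), make_replica()
replica_a("order:7")  # first attempt: DB queried, pure function computed
replica_b("order:7")  # retry on another replica: memcache hit, pure recomputed
print(db_calls, pure_calls)  # → 1 2
```

The retry avoids the expensive DB query because memcache is shared, but the pure function runs again because its cache belonged to the first replica.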