Jeff Kao
So I will say we do make use of the geo library from Rust, which is very popular.
And one of the top issues is that it panics instead of handling errors, which is kind of un-Rust-like, I guess.
We have this service called Sentry that we use for error logging.
And there are essentially no exceptions normally, apart from that library.
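(A minimal sketch of one way to contain that kind of failure, assuming a hypothetical geometry routine standing in for the panicking library call: wrap it in std::panic::catch_unwind so a panic becomes a Result the caller can log, for example to Sentry, instead of crashing the request. The function names and numbers are illustrative, not the actual service code.)

```rust
use std::panic::{catch_unwind, AssertUnwindSafe};

// Hypothetical geometry routine standing in for a library call that panics
// on degenerate input instead of returning a Result.
fn ring_area(coords: &[(f64, f64)]) -> f64 {
    assert!(coords.len() >= 3, "ring needs at least three points"); // panics on bad input
    // Shoelace formula over the closed ring.
    let mut twice_area = 0.0;
    for i in 0..coords.len() {
        let (x1, y1) = coords[i];
        let (x2, y2) = coords[(i + 1) % coords.len()];
        twice_area += x1 * y2 - x2 * y1;
    }
    twice_area.abs() / 2.0
}

// Convert a panic into an error the caller can log (e.g. report to Sentry) and recover from.
fn safe_ring_area(coords: &[(f64, f64)]) -> Result<f64, String> {
    catch_unwind(AssertUnwindSafe(|| ring_area(coords)))
        .map_err(|_| "geometry routine panicked on degenerate input".to_string())
}

fn main() {
    println!("{:?}", safe_ring_area(&[(0.0, 0.0), (4.0, 0.0), (4.0, 3.0)])); // Ok(6.0)
    println!("{:?}", safe_ring_area(&[(0.0, 0.0)]));                         // Err(...)
}
```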
But typically, we don't see stack traces, and when we do, it's usually due to spikes or, essentially, queries of death.
And there's a whole podcast we could do on reliability and things like that.
But the service itself is pretty efficient.
And our rewrite of the search indexes is going to make the service even faster.
And as I mentioned, the storage and compute separation is going to introduce another level of scalability and the ability to handle spikes.
Generally, if we're looking at the proportion, there are really two main backend services: our API server and then HorizonDB.
I would say there's not a significant amount of outages from those.
So at a steady state, we're looking at about 18,000 queries per second.
It's a workload, yeah. I would say it's a lot, though maybe at other companies it's not super high, because there are some cases where things will fan out and suddenly you get hundreds of thousands of TPS. But it's enough to know that if you made a certain optimization, you can quickly tell whether things went right or went horribly wrong.

How many nodes do you need to handle that traffic?
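(To make the capacity question concrete, here is a back-of-the-envelope sketch. The per-node capacity and headroom factor are made-up illustrative numbers, not figures from the conversation; only the 18,000 QPS steady state comes from above.)

```rust
// Back-of-the-envelope node count: steady-state load plus headroom for fan-out spikes.
// Per-node capacity and headroom are assumptions for illustration only.
fn nodes_needed(total_qps: f64, per_node_qps: f64, headroom: f64) -> u64 {
    (total_qps * (1.0 + headroom) / per_node_qps).ceil() as u64
}

fn main() {
    let steady_state_qps = 18_000.0; // figure mentioned above
    let per_node_qps = 2_000.0;      // hypothetical per-node capacity
    let headroom = 0.5;              // hypothetical 50% buffer for spikes
    println!("{} nodes", nodes_needed(steady_state_qps, per_node_qps, headroom)); // prints "14 nodes"
}
```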