Ryan Worrell
There's a lot of different ways that can happen. But typically, the way we've experienced it is that if you ask an executive at a company that uses Kafka heavily, is your application latency sensitive? They'll say, of course. We're an extremely high-performance organization. We love high-performance systems.
Obviously, the end-to-end latency couldn't be anything more than 50 milliseconds. That would be crazy if it were anything more than that. And then you go a little bit further down the chain in the organization. You ask the application developer or the SRE who's actually on call for the thing or wrote the code. You ask them and they're like, I don't know.
I hope that it's fast, but I'm not really sure. Or you ask them and you get an explicit answer that's very different than the answer that the executive gave you. Yeah. Realistically, there are a few applications that we come across that do need that low latency.
And the primary example of that, I mean, there's a lot of this kind of application out there in different domains, but a good example that demonstrates it is credit card fraud detection. There are people out in the real world using credit cards, and you want to make a determination about whether a charge is fraudulent at the point in time that they're swiping the card.
So that is necessarily a real-time thing. There's a user who's waiting out in the real world. And if Kafka is in the critical path, especially multiple hops through Kafka in the critical path, then a system that has higher latency, like WarpStream, would be harder to adopt. And there are other applications that meet this criteria.
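To make the critical-path point concrete, here's a minimal sketch of a fraud check that blocks on a synchronous Kafka produce, assuming the open-source confluent-kafka Python client. The broker address, topic name, and 50 ms budget are hypothetical, and the real decision logic would live in a downstream consumer.

```python
import time
from confluent_kafka import Producer

# Hypothetical producer tuned for latency: no batching delay, wait for full acks.
producer = Producer({
    "bootstrap.servers": "localhost:9092",  # placeholder broker address
    "linger.ms": 0,                         # don't hold messages to build batches
    "acks": "all",
})

def record_swipe(transaction: bytes) -> float:
    """Synchronously publish a card swipe and return the milliseconds spent waiting.

    Because the cardholder is standing at the terminal, every millisecond of
    produce latency (and of any further Kafka hops downstream) lands directly
    on the user-visible response time.
    """
    start = time.monotonic()
    producer.produce("card-swipes", value=transaction)  # hypothetical topic
    producer.flush(timeout=0.05)  # roughly a 50 ms budget for this hop alone
    return (time.monotonic() - start) * 1000
```

Each additional hop through Kafka on the way to the fraud decision repeats that wait, which is why a higher-latency system is a harder fit for this kind of workload.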
But basically, if the user is in the critical path of the request, then WarpStream is harder to adopt in the abstract. Obviously, some specific applications might be OK with higher latency than others, but that's the one that we see from time to time. When you strip all those out, though, the things that you have left are the more analytical type applications.
Like the example I was talking about before, moving application logs around. Developers are pretty used to some delay between the log print statement running inside their application and the log being searchable wherever they consume their logs. So the additional one second of latency there is typically a non-issue.
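For contrast, a log-shipping producer can be tuned for throughput rather than latency. Here's a minimal sketch, again assuming the confluent-kafka Python client with standard librdkafka settings; the broker address and topic name are placeholders.

```python
from confluent_kafka import Producer

# Throughput-oriented settings: trade a little latency for much bigger batches.
log_producer = Producer({
    "bootstrap.servers": "localhost:9092",  # placeholder broker address
    "linger.ms": 500,                       # wait up to 0.5 s to fill larger batches
    "batch.num.messages": 100000,           # amortize per-request overhead
    "compression.type": "zstd",             # logs compress well; fewer bytes on the wire
    "acks": "all",
})

def ship_log_line(line: str) -> None:
    # Fire-and-forget: nobody is blocked waiting for this log line to become
    # searchable, so an extra second of end-to-end latency is a non-issue.
    log_producer.produce("app-logs", value=line.encode())  # hypothetical topic
    log_producer.poll(0)  # serve delivery callbacks without blocking
```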
And the reason why that's useful for us as a company at WarpStream is that those workloads are typically really high volume and they cost the user a lot of money. So our solution being more cost effective really resonates with them, because usually there's also a curve of: the more data you're generating, the less valuable that data is per byte.
So there's budget pressure to process that data more efficiently. You want to increase the efficiency of processing that data, and Kafka sticks out like a sore thumb in terms of that processing cost.
So we can come in and say, hey, because the cloud providers don't charge you for bandwidth between VMs and object storage, and we store all the data in object storage, you're going to save this many hundreds of thousands of dollars a year on sending the dumb application logs you're generating into the eventual downstream storage. That makes a lot of sense to them.
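The back-of-the-envelope math behind that pitch looks roughly like the sketch below. The prices and volumes are illustrative assumptions in the ballpark of public AWS list pricing (cross-AZ transfer billed per GB in each direction, S3 PUT requests billed per thousand, in-region EC2-to-S3 bandwidth free), not a quote, and the replication and batching figures are made up for the example.

```python
# Assumed workload and prices (illustrative, not a quote).
GB_PER_DAY = 10_000          # a high-volume logging workload
CROSS_AZ_PER_GB = 0.02       # ~$0.01 out + ~$0.01 in per GB crossing an AZ
S3_PUT_PER_1000 = 0.005      # S3 PUT request pricing per 1,000 requests
AVG_OBJECT_MB = 8            # assumed size of each batched segment written to S3

# Classic Kafka with replication factor 3 spread across 3 AZs: assume ~2/3 of
# produce traffic crosses an AZ to reach the leader, plus two follower copies
# that each cross an AZ.
cross_az_gb = GB_PER_DAY * (2 / 3 + 2)
kafka_transfer_cost = cross_az_gb * CROSS_AZ_PER_GB

# Object-storage path: bandwidth from VMs to S3 is free in-region; you pay per PUT.
puts_per_day = GB_PER_DAY * 1024 / AVG_OBJECT_MB
s3_request_cost = puts_per_day / 1000 * S3_PUT_PER_1000

print(f"Kafka cross-AZ transfer: ${kafka_transfer_cost:,.0f}/day "
      f"(~${kafka_transfer_cost * 365:,.0f}/year)")
print(f"Object storage PUTs:     ${s3_request_cost:,.0f}/day "
      f"(~${s3_request_cost * 365:,.0f}/year)")
```

With those assumptions the cross-AZ transfer bill alone lands in the hundreds of thousands of dollars per year, while the object-storage request cost is a rounding error, which is the shape of the savings being described.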