Jeff Baxter
And if they're across multiple different data stores, you're pushing essentially the same patches into every data store.
Being able to capture those patches inline is probably the most obvious and clean example of where it's going to provide a huge amount of savings, right?
Because we've always, well, I shouldn't say always, but over the last year or two, we've been able to deduplicate them as they went to each data store.
But then when you move on to data store number two, assuming it's in a different volume, right?
You're capturing at least one new iteration of that patch, if not more.
And so on a VDI deployment or even in just standard virtual infrastructure with Windows patching or whatever, we're already starting to see some savings there.
And, you know, I'll be honest, it's something everyone's been asking for for a long time.
And on the other side, it's just another cog in this overall process of data efficiency for us.
So we still do, you know, inline zero removal, right?
We still do inline dedupe at the volume level.
And then we check for duplicates across the aggregate.
We still do compression.
We still do compaction.
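The difference between volume-level and aggregate-level dedupe can be sketched with a toy fingerprint store. This is a minimal illustration, not NetApp's actual implementation: it assumes content-hash fingerprints and models each dedupe scope as a key in a dictionary.

```python
import hashlib

def write_blocks(blocks, scope, store):
    """Store a block only if its fingerprint is new within the given scope.

    `store` maps (scope, fingerprint) -> block; physical usage is len(store).
    """
    for block in blocks:
        fp = hashlib.sha256(block).hexdigest()
        store.setdefault((scope, fp), block)

# The same patch written into two different volumes (e.g. two VDI datastores).
patch_blocks = [b"patch-block-A", b"patch-block-B"]

# Volume-level dedupe: each volume is its own fingerprint scope,
# so the shared patch is physically stored once per volume.
per_volume = {}
write_blocks(patch_blocks, "vol1", per_volume)
write_blocks(patch_blocks, "vol2", per_volume)
print(len(per_volume))  # 4 physical blocks

# Aggregate-level dedupe: one fingerprint scope spans all volumes,
# so the patch is physically stored only once.
per_aggregate = {}
write_blocks(patch_blocks, "aggr", per_aggregate)
write_blocks(patch_blocks, "aggr", per_aggregate)
print(len(per_aggregate))  # 2 physical blocks
```

Same logical writes in both cases; the wider scope is what eliminates the second copy of the patch.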
So what you're going to start to see as you work with your NetApp team is we're going to help you size it.
And based upon your workloads, we'll run our sizing in our internal tools.
And we'll look at what your system is able to do.
And we'll say, okay, if you did this in 9.1, you would get, you know,
3.2 to 1, and if you did this with aggregate inline dedupe, you might get 3.9 to 1, just as an example, right?
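Those ratios translate directly into physical capacity. A quick sketch of the arithmetic, using a hypothetical 100 TB of logical data (the 3.2:1 and 3.9:1 figures are the example ratios from above, not guaranteed sizing results):

```python
def physical_needed(logical_tb, efficiency_ratio):
    """Physical capacity implied by a logical:physical efficiency ratio."""
    return logical_tb / efficiency_ratio

logical = 100.0  # TB of logical data -- hypothetical workload size

# Volume-level dedupe only, 3.2:1
print(round(physical_needed(logical, 3.2), 1))  # ~31.2 TB

# With aggregate inline dedupe, 3.9:1
print(round(physical_needed(logical, 3.9), 1))  # ~25.6 TB
```

So even a modest bump in the ratio, 3.2 to 3.9, saves several terabytes of physical capacity at this scale.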