Jeff Baxter:
It's just going to continue to add on to the ways that we can shave off more duplicative data in the system and return more space back to our customers, right?
That's the whole goal of all our storage efficiency technologies, and this is just another, albeit very important, cog that fits seamlessly into that entire process, working automatically to reduce duplicative data across the system.
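To make the idea concrete, here's a minimal sketch of fingerprint-based block dedupe; the hashing scheme, block size, and data structures are illustrative assumptions, not ONTAP's actual implementation.

```python
import hashlib

def dedupe_blocks(blocks):
    """Keep only one physical copy of each unique block.

    Illustrative only -- real systems fingerprint fixed-size blocks
    (e.g., 4 KB), keep reference counts, and do this inline on write.
    """
    fingerprints = {}   # content hash -> stored block
    physical = []       # blocks that actually consume space
    for block in blocks:
        fp = hashlib.sha256(block).hexdigest()
        if fp not in fingerprints:
            fingerprints[fp] = block   # first copy: keep the data
            physical.append(block)
        # later copies just reference the existing block via its fingerprint
    return physical

logical = [b"A" * 4096, b"B" * 4096, b"A" * 4096]   # three 4 KB logical blocks
print(len(logical), "logical ->", len(dedupe_blocks(logical)), "physical")
# 3 logical -> 2 physical
```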
You know, one more thing while we're talking about the 800-terabyte aggregates and now being able to do inline dedupe across that whole thing.
Let me be really candid, right?
This whole concept of global dedupe has always been kind of one of those red herrings, something that gets checked off on RFPs and things like that.
One of the challenges with global dedupe has always been that it requires you to manage so much metadata that we've actually seen systems run into issues as that metadata grows.
So global dedupe is great.
Doing it the way we do it is great as well.
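To put rough numbers on that metadata burden, here's a back-of-the-envelope calculation; the block size and per-entry size are assumptions for illustration, not ONTAP internals.

```python
# Rough fingerprint-metadata cost of a large dedupe domain.
pool_bytes = 800 * 10**12        # an 800 TB dedupe domain
block_bytes = 4 * 1024           # assume 4 KB blocks
entry_bytes = 32                 # assume hash + pointer per fingerprint entry

n_blocks = pool_bytes // block_bytes
metadata_tb = n_blocks * entry_bytes / 10**12
print(f"{n_blocks:.2e} blocks -> ~{metadata_tb:.1f} TB of fingerprint metadata")
# ~1.95e+11 blocks -> ~6.2 TB of metadata to consult on every inline write
```

That scale is why a truly global dedupe pool is harder than it sounds on an RFP checkbox: the fingerprint store itself becomes a large, performance-critical dataset.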
So the one thing I'll say is, if you look at an 800-terabyte aggregate and get something like a four-to-one or five-to-one ratio on it, you're getting around four petabytes of effective logical capacity.
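The arithmetic behind that claim is simple; this just multiplies the aggregate size from the conversation by the quoted ratios.

```python
# Effective (logical) capacity from a dedupe ratio.
physical_tb = 800
for ratio in (4, 5):
    print(f"{physical_tb} TB at {ratio}:1 -> "
          f"{physical_tb * ratio / 1000:.1f} PB effective")
# 800 TB at 4:1 -> 3.2 PB; at 5:1 -> 4.0 PB
```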
It's kind of interesting because when I go out and look at our major competitors who claim these incredibly large dedupe pools, that 800 terabyte aggregate is bigger than their largest system.
So from that perspective, what I would argue, and that's always going to change, right? There's going to be leapfrogging, a tit-for-tat over who's got the largest system, who's got the largest dedupe volume.
But for people who are concerned, saying, "I really want to be able to dedupe across everything," you can now dedupe across a larger boundary, across more actual data and effective logical data, on an ONTAP 9.2 system than you can, I think, on any of our major competitors' systems, because even a single aggregate on our side is now bigger than the biggest system they offer.
And then in a lot of cases, they don't even have scale out to go any larger.
So, you know, getting into fights about maximums is always, I feel, a little bit silly.
It doesn't focus on customer value, but I think our customers can rest assured that if they want to have
you know, multiple petabytes of data that dedupe is running across, we're now able to do that for them.
Yeah, being able to do a million IOPS only matters right up until you need 999,999 IOPS.