Glenn Dekheyser
Now you've got your AI data pipeline, which could have, let's say, an AFX cluster and AIID running, right?
Well, I want to manage that data.
Let's say I use Veeam or another NetApp partner, or some combination, to back that stuff up and get it to tape for long-term AI archiving.
I can bring it back and automate it back into my AI pipeline.
Who wants to run a tape library anymore?
Nobody, right?
But you still need to use it.
So what we're trying to do is put together these solutions.
I hesitate to call them managed services, because the customers for these components will actually be managed service providers who don't want to physically manage tape libraries, but we can do that for them, right?
We're managing the private AI stuff.
We'll manage as much as a service provider doesn't want to do themselves.
For instance, managing up to Base Command or Mission Control in an NVIDIA cluster.
We'll take all that and manage it, including if there's liquid cooling involved, which we haven't discussed at all.
And there's Equinix's ability to bring liquid cooling into this world with the new Grace Blackwell stuff and beyond, and the crazy requirements of Vera Rubin in the future.
As we start getting this high-power stuff, Equinix is ready, but most folks don't know how to manage a liquid-cooled environment.
They just don't.
And even a lot of the partners don't know how to do that.
And why should they?
How often do you get to do that?