Andy Halliday
I mean, yeah.
That's a big corpus there.
Yeah, we're not leaving Google yet, but let's move over to the chip space.
So Google has been building tensor processing units, TPUs, not GPUs, that are focused on AI inference and AI training.
Well, they just announced their seventh generation tensor processing unit, which they call Ironwood, named after the band that Jude Simmons and I are in.
It's called Ironwood.
Uh, anyway, no, that's not real, but that part actually is real, and I live on a street, so anyway, you'll never forget Ironwood, because Ironwood is the name of this new TPU. Just like, you know, Rubin is the name for the chipset that NVIDIA is moving to; they have names for them, plus designations. Well, this one is kind of an NVIDIA killer, so here I want to give you some stats on this thing.
So it delivers 4,600 teraflops of performance and has 192 gigabytes of HBM3E memory.
That's, I'm sure, a very good memory.
So it has a bandwidth of 7.4 terabytes per second.
So this is a massive acceleration for AI.
And importantly, unlike the NVIDIA GB300, right, their Blackwell series, they can couple all of these together, up to 9,216 accelerators, to create, here's the comparison, a data center basically for AI inference and training that can do 42 and a half exaflops
of training and inference, which is far beyond what the GB300 system, the NVL72 system from NVIDIA, can do, which is just one third of an exaflop.
So compare 42 and a half exaflops to one third of an exaflop.
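The comparison above checks out arithmetically. Here's a minimal back-of-the-envelope sketch, assuming Google's announced per-chip figure of roughly 4,614 FP8 TFLOPs and a full pod of 9,216 chips; these are peak marketing numbers, not measured throughput:

```python
# Sanity-check the Ironwood pod figures quoted above.
CHIP_TFLOPS = 4_614      # peak FP8 TFLOPs per Ironwood chip (announced)
CHIPS_PER_POD = 9_216    # accelerators coupled in a full pod

# TFLOPs -> exaflops (1 exaflop = 1,000,000 TFLOPs)
pod_exaflops = CHIP_TFLOPS * CHIPS_PER_POD / 1_000_000
print(f"Pod peak: {pod_exaflops:.1f} exaflops")  # -> 42.5

# Ratio against the quoted GB300 NVL72 figure of about one third of an exaflop
nvl72_exaflops = 1 / 3
print(f"Ratio vs NVL72: {pod_exaflops / nvl72_exaflops:.0f}x")  # -> 128x
```

Note the comparison is pod-versus-rack: an NVL72 is 72 GPUs in a single rack, while the Ironwood pod spans thousands of chips, so the 128x gap reflects scale as much as per-chip design.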
So the Blackwell GB300 has just been far surpassed in terms of design for massive AI acceleration.
But you got to believe that NVIDIA has got the next generation in the hopper, right?
But anyway, Google is right there and maybe even passing up NVIDIA with their TPUs.
That's crazy.
Was it your bros who gave you that name?