Karen Moscow
From Brussels, I'm following the politics, policy and the people shaping the European Union right now.
And from London, I'm looking at what all that means for markets, money and the wider economy.
We've got reporters across Europe and around the globe feeding in as stories break.
So whether it's geopolitics, energy, tech or markets, you're hearing it while it happens.
It's smart, calm and to the point.
You can find new episodes of the Bloomberg Daybreak Europe podcast by 7am in Dublin or 8am in Brussels, Berlin and Paris on Apple, Spotify, YouTube or wherever you get your podcasts.
This is Meta's chip lab in Fremont, California.
Inside, the company is developing the next generations of MTIA, short for Meta Training and Inference Accelerator, its in-house AI chip program.
It's a long-term effort to build the most efficient architecture for Meta's internal workloads, with four new generations of chips planned over the next two years, from ranking and recommendations to large-scale gen AI inference.
When chips come in from the fab, this is where they're validated: tested at the chip, rack, and workload level before deployment into Meta's data centers.
MTIA 300 is already in production, supporting ranking and recommendations training, helping decide what shows up in your social feed.
Those go into liquid-cooled servers like these.
MTIA 400 is moving towards deployment, expanding into broader AI workloads, including Gen AI.
Future versions, the 450 and 500, push further into Gen AI inference, with deployments planned in 2027.
The effort hasn't always moved as quickly as Mark Zuckerberg and Meta had hoped.
Meta has made some acquisitions, and attempted others, in an effort to strengthen its in-house chip talent and accelerate progress.
AI models are evolving faster than traditional chip cycles, so Meta is speeding up the design process, aiming to improve performance, cost, and power efficiency at scale.
At the same time, the company is striking major supply deals with leading chip makers, securing gigawatts of AI computing capacity.