Sundar Pichai
Podcast Appearances
I think people don't internalize that Google is one of the largest enterprise software companies in the world now. And the largest media company, in some ways, right? And definitely, we're doing a podcast; I think we're the largest podcasting service in the world. And so I feel like as a company, we are set up well. For the first time, you have this cross-cutting technology.
To our earlier point, thinking of us as a deep computer science company, what better technology than AI, which horizontally can impact all aspects of our business? Search, YouTube, cloud, Waymo, and the other new things we are doing. So it feels like an exciting time. So not a lot of what... You know, we've continued to do well in search. We are doing well in these other businesses.
And so to me, it feels like, you know, one of the biggest opportunities ahead as a company too. I think the next decade ahead looks to me as exciting as the past decade, so.
We can unpack both, right? Like where our CapEx is going. But on your first part, one of the ways we look at it is the Pareto frontier of performance and cost. Google literally is on the Pareto frontier. So we deliver the best models at the most cost-effective price point, right? And, you know, our Flash series of models are a real workhorse in the industry, right?
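The "Pareto frontier of performance and cost" mentioned here has a precise meaning: a model is on the frontier if no other model is both cheaper and better. A minimal sketch of that computation, using made-up model names, prices, and quality scores (purely illustrative, not actual Gemini pricing or benchmarks):

```python
# Hypothetical (name, cost, quality) points -- illustrative numbers only.
models = [
    ("model-a", 0.10, 60.0),   # (name, $ per 1M tokens, quality score)
    ("model-b", 0.30, 75.0),
    ("model-c", 0.25, 70.0),
    ("model-d", 1.00, 90.0),
    ("model-e", 0.50, 72.0),
]

def pareto_frontier(points):
    """Keep points not dominated by another point that is at least as
    cheap AND at least as good, and strictly better on one axis."""
    frontier = []
    for name, cost, quality in points:
        dominated = any(
            c <= cost and q >= quality and (c < cost or q > quality)
            for _, c, q in points
        )
        if not dominated:
            frontier.append((name, cost, quality))
    # Sorted by cost, quality rises monotonically along the frontier.
    return sorted(frontier, key=lambda p: p[1])

print(pareto_frontier(models))
# model-e is dominated by model-b (cheaper and higher quality), so it drops out.
```

Being "on the frontier" at many price points, as claimed here, means offering the non-dominated option at each cost tier, which is why a cheap workhorse model and an expensive frontier model can both sit on the same curve.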
And part of why we are able to do that is because, you know, we train and serve our models on our own infrastructure, including TPUs, right? We are in our seventh generation of TPUs, and we built our first version in 2017. I remember talking about it at Google I/O. Probably people didn't pay attention to it because, like, you know, why are you building a specific machine learning accelerator chip?
Look, it plays out everywhere. To your earlier question on cost per query in search, the reason we feel comfortable we can serve it at that scale is because we are constantly innovating through each generation, including chips which are really, really good at inference. And Ironwood, which is the latest in our TPU series, a single pod of Ironwood is over 40 exaflops.
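To put the "over 40 exaflops per pod" figure in perspective, a back-of-the-envelope sketch. The per-chip throughput and chip count below are assumptions chosen to be consistent with the quoted pod total, not official Ironwood specifications:

```python
# Hypothetical figures: suppose each accelerator chip delivers ~4.6 petaflops
# and a pod links ~9,216 chips. (Illustrative assumptions, not official specs.)
chips_per_pod = 9_216
petaflops_per_chip = 4.6

# 1 exaflop = 1,000 petaflops
pod_exaflops = chips_per_pod * petaflops_per_chip / 1_000
print(f"{pod_exaflops:.1f} exaflops per pod")  # -> 42.4 exaflops per pod
```

The point of the arithmetic is that pod-scale totals come from networking thousands of chips together, so both per-chip efficiency and interconnect scale drive the headline number.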
Right, and so the scale of these things is incredible. And we have thought about our infrastructure all the way from subsea cables up; the scale at which we do infrastructure is unparalleled. And I've always viewed it as a full-stack approach: deep infrastructure at the foundation, fundamental R&D on top of it, and then you build and innovate on top of that.
And I think that approach will serve us well over time, but it really empirically plays out in the cost at which we are able to provide our models. Part of the reason we've had a lot of traction with the Gemini 2.5 series is not only are they great models, but we are offering them at a very attractive value. And we can do that because we are driving our infrastructure costs down.
On the $75 billion in CapEx for 2025, obviously the majority of that goes into servers, data centers, and so on, servers being the vast portion of it. I would say, looking at 2025 and the compute part of the spend, half of that is going towards our cloud business in 2025. And obviously, it's a very different business to search and so on.
So a lot of it is to power the innovations from Google DeepMind, pushing the frontier. And we're doing it across many dimensions, not just large language models, but even there doing it across not just text, images, video, et cetera, building world models, right?