Sergey Brin
Podcast Appearances
But it could suck down a whole chat space and then answer pretty complicated questions. So I was like, okay, summarize this for me. Okay, now assign something for everyone to work on. And then I would paste it back in so people didn't realize it was the AI. I admitted that pretty soon. And there were a few giveaways here or there. But it worked remarkably well.
And then I was like, well, who should be promoted in this chat space? Yeah. And it actually picked out this woman, this young woman engineer who like, you know, I didn't even notice she wasn't very vocal particularly in that group. But her PRs kicked ass.
No, no, it was like, and then, I don't know, something that the AI had detected and I went and I talked to the manager actually and he was like, yeah, you know what, you're right. Like, she's been working really hard, did all these things. Wow. I think that ended up happening actually. So, I don't know, I guess after a while you just kind of take it for granted that you can just do these things.
I don't know, it hasn't really... Do you think that there's a use case for an infinite context length? Oh, 100%. All of Google's code base goes in, one day.
Yeah, I mean, I guess if it knows everything, then you can have just one in theory. You just need to somehow tell it what you're talking about. But yeah, for sure, there's no limit to the uses of context, and there are a lot of ways to make it larger and larger.
I mean, for any such cool new idea in AI, there are probably five such things internally. And the question is, how well do they work? And yeah, I mean, we're definitely pushing all the bounds in terms of intelligence, in terms of context, in terms of... you name it. And what about the hardware?
Well, we mostly, for Gemini, we mostly use our own TPUs. But we also do support NVIDIA and we were one of the big purchasers of NVIDIA chips and we have them in Google Cloud available for our customers in addition to TPUs. At this stage, it's, for better or for worse, not that abstract. And maybe someday the AI will abstract it for us.
But given just the amount of computation you have to do on these models, you actually have to think pretty carefully about how to do everything. Exactly what kind of chip you have, and how the memory works and the communication works and so forth, are actually pretty big factors. And yeah, maybe one of these days the AI itself will be good enough to reason through that.
Today it's not quite good enough.
Everything is getting better and faster. Smaller models are more capable. There are better ways to do inference on them that are faster.