Noam Shazeer
Some things we think are really interesting and important for improving our products, we'll get them out into our products and then make a decision, you know, do we publish this, or do we give kind of a lightweight, you know, discussion of it, but maybe not every last detail.
And then other things I think we publish openly and try to advance the field and the community because that's how we all kind of benefit from, you know, participating.
You know, I think it's great to go to conferences like NeurIPS last week with like 15,000 people, you know, all sharing lots and lots of great ideas.
And, you know, we published a lot of papers there as we have in the past.
And, you know, seeing the field advance is super exciting.
As we say around the micro kitchen, such a good model, such a good model.
I mean, I think...
Yeah, we've been working on language models for a long time.
You know, Noam's early work on spelling correction in 2001, the work on translation, very large-scale language models in 2007, and Seq2Seq and Word2Vec and, you know, more recently Transformers, and then BERT and things like the internal Meena system that was actually a chatbot-based system designed to kind of engage people in interesting conversations.
We actually had an internal chatbot system that Googlers could play with even before ChatGPT came out.
And actually, during the pandemic, when everyone was in lockdown at home, a lot of Googlers would enjoy spending time chatting with Meena during lunch because it was like a nice lunch partner.
And I think our view of things from a search perspective was that these models hallucinate a lot, and they don't get things right a lot of the time, or at least some of the time.
And that means that they aren't as useful as they could be.
And so we'd like to make that better.
And, you know, from a search perspective, you want to get the right answer 100% of the time, ideally being very high on factuality, and these models were not near that bar.
But I think what we were a little unsure about was just how incredibly useful they were.
Oh, and they also had all kinds of safety issues, like they might say offensive things, and you had to work on that aspect and get it to a point where we were comfortable releasing the model.