Noam Shazeer
And I think right now the trend is that the models are getting substantially better generation over generation.
And I don't see that slowing down in the next few generations.
So that means the models, say, two to three generations from now will go from the example I gave of breaking down a simple task into 10 sub-pieces and doing it right 80% of the time, to breaking down a very high-level task into 100 or 1,000 pieces and getting that right 90% of the time.
That's a major, major step up in what the models are capable of.
So I think it's important for people to understand the pace of progress happening in the field.
And then those models are going to be applied in a bunch of different domains.
And I think it's really important to make sure that we, as a society, get the maximal benefit from what these models can do to improve things. I'm super excited about areas like education and healthcare, and about making information accessible to all people.
But we also realize that they could be used for misinformation.
They could be used for automated hacking of computer systems.
And we want to understand the capabilities of the models and put as many safeguards and mitigations in place as we can.
And I think Google as a whole has a really good view of how we should approach this.
Our responsible AI principles are actually a pretty nice framework for thinking about the tradeoffs of making better and better AI systems available in different contexts and settings, while also making sure that we're doing the right thing: making sure they're safe, not saying toxic things, and so on.
One thing I would say is that there are extreme views on either end.
There's the view that these systems are going to be so much better than humans at all things that we're going to be overwhelmed.
And then there's the view that these systems are going to be amazing, and we don't have to worry about them at all.
I think I'm somewhere in the middle. I'm a co-author on a paper called Shaping AI, which points out that those two extreme views often treat our role as laissez-faire, as if we're just going to let AI develop along whatever path it takes.
And I think there's actually a really good argument to be made that what we should do is try to shape and steer the way in which AI is deployed in the world, so that it is maximally beneficial in the areas we want to benefit from: education, healthcare, some of the areas I mentioned.