Jeff Dean
Podcast Appearances
So I'm probably more in the second camp, of thinking that we're going to see a lot of acceleration.
As these systems do get more powerful, you have to be more and more careful.
I think the good news is that analyzing text seems to be easier than generating text.
So I believe that the ability of language models to analyze language model output, and to figure out what is problematic or dangerous, will actually be the solution to a lot of these control issues.
We are definitely working on this stuff.
We've got a bunch of brilliant folks at Google working on this now, and I think it's just going to be more and more important, both from a do-something-good-for-people standpoint and from a business standpoint.
A lot of the time, you're limited in what you can deploy by the need to keep things safe.
So it becomes very important to be really, really good at that.
Yeah.
I mean, I think we're also going to use these systems a lot to check themselves and to check other systems.
I mean, even as a human, it's easier to recognize something than to generate it.
Yeah, I mean, I think the goal is to empower people.
For the most part, we should be letting people do things with these systems that make sense, closing off as few parts of the space as we can.
But if you let somebody take your thing and create a million evil software engineers, that doesn't empower people, because those million evil software engineers are going to hurt others.
So I'm against that.
The early days were super fun.