Bret Taylor
are distinct to me. So starting with the step function, I don't think it's a foregone conclusion that we'll have step function changes. I believe the most responsible way to develop AGI is responsible iterative deployment. The reason for that is I believe that as you're thinking about things like the societal impact, access to this technology and the safety side of AGI as well, that the
best way we can learn about how to ensure that these models benefit humanity is to consistently release them, learn from those experiences on the safety side, learn about the harm, learn about really specific vulnerabilities like jailbreaking and improve it at every turn. We could end up with a plateau of progress, or as you said, diminishing returns. The three inputs to progress in AI are
number one, data; number two, compute; and number three, algorithms and methodology. So if you look at the short history of sort of this current wave of modern AI, it started, I think, with the Transformers model, which was a paper from Google called "Attention Is All You Need," which changed the scale with which you could build these models, which led to many of the sort of
GPT breakthroughs that came next. You ended up with instruction tuning, which was how you turned one of these models into a chat interface, which was a breakthrough as well. Given even existing data and existing compute, we have all of the best minds in computer science thinking about different techniques. Similarly, there are folks looking even beyond the Transformers model and things like that.
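To make the instruction-tuning point concrete, here is a minimal sketch of the data-formatting step it rests on, assuming generic role markers: a base model is fine-tuned on instruction/response pairs rewritten into a chat layout. The markers and helper below are illustrative, not any specific model's actual template.

```python
# Minimal sketch of instruction tuning's data-formatting step.
# The role markers are a generic illustration; real chat templates
# vary by model family.

def to_chat_example(instruction: str, response: str) -> str:
    """Format one instruction/response pair with role markers."""
    return (
        "<|user|>\n" + instruction.strip() + "\n"
        "<|assistant|>\n" + response.strip() + "\n"
    )

raw_pairs = [
    ("Summarize: The cat sat on the mat.", "A cat sat on a mat."),
    ("Translate to French: Hello", "Bonjour"),
]

# Each formatted string becomes one fine-tuning example; in practice the
# loss is usually computed only on the assistant's tokens.
train_examples = [to_chat_example(q, a) for q, a in raw_pairs]
print(train_examples[0])
```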
So I think that that's one area where you could have a big breakthrough. You have compute: just pick up a newspaper and read about the investment in GPUs, and these clusters are getting bigger and bigger. And even with the same amount of data, training, both pre-training and post-training, can have a really big impact on quality.
And then on the data side, there's a lot of writing about sort of running out of some of the textual data. But there's a lot of really interesting companies working on simulation. There's a lot of interesting exploration in synthetic data generation. There's multimodality. So whatever is true of text, you know, there's lots of video, audio, and image content as well.
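As a concrete illustration of the synthetic data idea, here is a toy sketch that generates question/answer pairs programmatically, so the ground-truth answer is known by construction. This is a simplified assumption, not any particular lab's pipeline; real systems often use a strong model or a simulator as the generator.

```python
import random

# Toy sketch of synthetic data generation: create question/answer pairs
# where the correct answer is guaranteed by construction.

def make_arithmetic_pair(rng: random.Random) -> dict:
    a, b = rng.randint(10, 999), rng.randint(10, 999)
    return {
        "prompt": f"What is {a} + {b}?",
        "answer": str(a + b),  # ground truth comes from the generator itself
    }

rng = random.Random(0)  # seeded so the dataset is reproducible
synthetic_dataset = [make_arithmetic_pair(rng) for _ in range(1000)]
print(synthetic_dataset[0])
```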
You know, in any one of those, you could probably make a very rational intellectual case that we're going to hit a wall, but then you have the two others. And I don't think you can make the case for all three that they're all coming up on a wall. And I think, like any big scientific effort, it will probably be a mix of progress in all of those. And as a consequence, I