Arvind Narayanan
Podcast Appearances
We shouldn't put too much stock into benchmarks. We should look at people who are actually trying to use these in professional contexts, whether it's lawyers or, you know, really anybody else. And we should go based on their experience of using these AI assistants.
So let's talk for a second about what AGI is. Different people mean different things by it and so often talk past each other. The definition that we consider most relevant is AI that is capable of automating most economically valuable tasks. By this definition, you know, of automating most economically valuable tasks, if we did have AGI, that would truly be a profound thing in our society.
So now for the CEO predictions, I think one thing that's helpful to keep in mind is that there have been these predictions of imminent AGI since the earliest days of AI, for more than half a century. Take Alan Turing. When the first computers were built or about to be built, people thought, you know, the two main things we need for AI are hardware and software. We've done the hard part, the hardware.
Now there's just one thing left, the easy part, the software. But of course, now we know how hard that is. So I think historically what we've seen, it's kind of like climbing a mountain. Wherever you are, it looks like there's just kind of one step to go. But when you climb up a little bit further, the complexity reveals itself. And so we've seen that over and over and over again.
Now it's like, oh, you know, we just need to make these bigger and bigger models. So you have some silly projections based on that. But soon the limitations of that started becoming apparent. And now the next layer of complexity reveals itself. So that's my view. I wouldn't put too much stock into these overconfident predictions from CEOs.
I certainly think the balance is possible. To some extent, every big company does this.
That's fair. And I think, you know, it would take discipline from management to be able to pull it off in a way that one part of the company doesn't distract the other too much. And we've seen this happen with OpenAI, where the folks focused on superintelligence didn't feel very welcome at the company.