Fei-Fei Li
Podcast Appearances
But once they start to interface with the world, the impact is not necessarily neutral at all. And there is so much humanness in everything we do in technology. And how do we connect that? I decided to talk about that with the interns.
Yeah, it was around that time, March 2018, that I published the New York Times op-ed. I laid out my vision for human-centered AI.
My overarching thesis is that we must center the values of technology development, deployment, and governance around people. Any technology, AI or any other technology, should be human-centered. As I always say, there are no independent machine values. Machine values are human values. Or, there's nothing artificial about artificial intelligence. So it's deeply human.
So human-centered AI should be a framework, and that framework could be applied in fundamental research and education, which is what Stanford does; or in creating businesses and products, as Google and many other companies do; or in the legislation and governance of AI, which is what governments do. So that framework can be translated in multiple ways.
Fundamentally, it is to put human dignity, human well-being, and the values that a society cares about into how you create AI, how you create AI products and services, and how you govern AI. For concrete examples, let me start from the very basic science, upstream. At Stanford, we created this Human-Centered AI Institute. We try to encourage cross-pollinating, interdisciplinary research and teaching about different aspects of AI, like AI for drug discovery, AI for developmental studies, or AI for economics, and all that. But we also need to keep in mind that we need to do this with the kind of norms that reflect our values. So we actually have a review process for our grants.
We call it the Ethics and Society Review process, where even when researchers are proposing a research idea to receive funding from HAI, they have to go through a review: what are the social implications? What is the ethical framework?