Eric Schmidt
We want to preserve that freedom for human agency. I think for most people, having the travel agent be automatic and not having to fiddle with the lights in their room and the computer getting it set up because it's a pain in the ass, excuse my language, having those efficiencies is a good thing. Having all the world's information at your fingertips is a good thing.
But when it ultimately prevents you from having freedom, then it's not such a good thing. And I think people will discover that boundary.
One of the most interesting things about the judicial system right now is that machine learning is being used to give you summaries of outcomes. The best one: if you're on trial, which thankfully neither you nor I are, you basically want to be heard in the morning, because by the end of the afternoon the judge is so tired of you that they just give you a harder sentence. Now, how was that discovered?
That was discovered using machine learning. I don't think that computers should be judges. Because I think part of the principle of our democracy is that humans make decisions and they're held accountable. You want to make sure that you have human agency over everything. There's nothing wrong with the computer making a recommendation to the judge. What is wrong is if the judge just listens to it.
Let me give you an example where this doesn't work. If it's a judge in a courtroom, it's perfectly fine. There are appeals if the judge makes a mistake, and so forth. It gets worked out. I mean, it's painful, but it gets worked out. But here we are, we're on a ship. You're the commander of that ship, and the system has detected, with some high probability, a hypersonic missile coming toward you.
And you have 29 seconds to press the button. And the system recommends pressing the button. 28, 27, 26, how many times do you think the captain of that ship will not press the button? They'll press the button. So that's an example where the system is designed to have human agency, but there's not enough time. So the compression of time is very important here.
And one of the core issues, and you mentioned this before, is these computers are moving so quickly. Another example that I like to use is, I don't know if you know, but there was a war and the war was that North Korea attacked America in cyberspace. America got ready to counterattack and China shut North Korea down.
Oh, and by the way, the entire war took a hundred milliseconds, less than a second. Now, how do you think about that? Obviously that war has not occurred yet, but is it possible? Absolutely. How do you preserve human agency under that compression of time?
Well, there are many such scenarios, and they go something like this. At some point, the computer's objective function, what it's being trained against, is broad enough that it decides lying to us is a good idea because it knows we're watching. Now, is this a possible scenario? Absolutely. Am I worried about it? No, because I'm much more worried about something else. I think the positive is clear.
Human plus AI is incredibly powerful. That also means that human plus AI is incredibly dangerous with the wrong human. I know these are all very interesting, the AI overlords and so forth, and they could take us and turn us into dogs, as I mentioned earlier. It's much more likely that the dangers will be because of human control over systems that are more powerful than they should be.