Roman Yampolsky
Podcast Appearances
So when I think about it, I usually think of a human with paper and a pencil, not a human with the internet and other AIs helping.
But we create AI. So at any point, you'll still just add superintelligence to human capability? That seems like cheating.
It seems like a hybrid of some kind. You're now doing brain-computer interfaces. You're connecting it to maybe narrow AIs. Yeah, it definitely increases our capabilities.
I am old-fashioned. I like the Turing test. I have a paper where I equate passing the Turing test to solving AI-complete problems, because you can encode questions about any domain into the Turing test. You don't have to talk about "how was your day?" You can ask anything. And so the system has to be as smart as a human to pass it in a true sense.
It has to be long enough that you can make some meaningful decisions about capabilities, absolutely. You can brute-force very short conversations.
For AGI, it has to be there: there is no task I can give to a human that it cannot also do. For superintelligence, it would be superior on all such tasks, not just average performance. Go learn to drive a car. Go speak Chinese. Play guitar. Okay, great.
You can develop a test which will give you positives if it lies to you or has those ideas. You cannot develop a test which rules them out. There is always the possibility of what Bostrom calls a treacherous turn, where later on a system decides, for game-theoretic or economic reasons, to change its behavior. And we see the same with humans; it's not unique to AI.
For millennia, we have tried developing morals, ethics, religions, lie-detector tests, and still employees betray their employers and spouses betray their families. It's a pretty standard thing intelligent agents sometimes do.