
Roman Yampolskiy

👤 Person
771 total appearances

Appearances Over Time

Podcast Appearances

Lex Fridman Podcast
#431 – Roman Yampolskiy: Dangers of Superintelligent AI

So when I think about it, I usually think human with a paper and a pencil, not human with internet and other AI helping.

Lex Fridman Podcast
#431 – Roman Yampolskiy: Dangers of Superintelligent AI

But we create AI. So at any point, you'll still just add superintelligence to human capability? That seems like cheating.

Lex Fridman Podcast
#431 – Roman Yampolskiy: Dangers of Superintelligent AI

It seems like a hybrid of some kind. You're now doing brain-computer interfaces. You're connecting it to maybe narrow AIs. Yeah, it definitely increases our capabilities.

Lex Fridman Podcast
#431 – Roman Yampolskiy: Dangers of Superintelligent AI

I am old fashioned. I like Turing test. I have a paper where I equate passing Turing test to solving AI complete problems, because you can encode any questions about any domain into the Turing test. You don't have to talk about how was your day? You can ask anything. And so the system has to be as smart as a human to pass it in a true sense.

Lex Fridman Podcast
#431 – Roman Yampolskiy: Dangers of Superintelligent AI

It has to be long enough to where you can make some meaningful decisions about capabilities, absolutely. You can brute force very short conversations.

Lex Fridman Podcast
#431 – Roman Yampolskiy: Dangers of Superintelligent AI

For AGI, it has to be there. I cannot give it a task I can give to a human, and it cannot do it if a human can. For superintelligence, it would be superior on all such tasks, not just average performance. Go learn to drive a car. Go speak Chinese. Play guitar. Okay, great.

Lex Fridman Podcast
#431 – Roman Yampolskiy: Dangers of Superintelligent AI

You can develop a test which will give you positives if it lies to you or has those ideas. You cannot develop a test which rules them out. There is always possibility of what Bostrom calls a treacherous turn, where later on a system decides for game theoretic reasons, economic reasons to change its behavior. And we see the same with humans. It's not unique to AI.

Lex Fridman Podcast
#431 – Roman Yampolskiy: Dangers of Superintelligent AI

For millennia, we tried developing morals, ethics, religions, lie detector tests, and then employees betray the employers, spouses betray family. It's a pretty standard thing intelligent agents sometimes do.