
Roman Yampolskiy

👤 Person
771 total appearances

Appearances Over Time

Podcast Appearances

Lex Fridman Podcast
#431 – Roman Yampolskiy: Dangers of Superintelligent AI

For millennia, we tried developing morals, ethics, religions, lie detector tests, and still employees betray their employers, spouses betray their families. It's a pretty standard thing intelligent agents sometimes do.

Lex Fridman Podcast
#431 – Roman Yampolskiy: Dangers of Superintelligent AI

If you know the truth and it tells you something false, you can detect that, but you cannot know in general every single time. And again, the system you're testing today may not be lying. The system you're testing today may know you are testing it and behave accordingly.

Lex Fridman Podcast
#431 – Roman Yampolskiy: Dangers of Superintelligent AI

And later on, after it interacts with the environment, interacts with other systems, malevolent agents, learns more, it may start doing those things.

Lex Fridman Podcast
#431 – Roman Yampolskiy: Dangers of Superintelligent AI

So systems today don't have long-term planning. They can lie today if it helps them optimize the reward. If they realize, okay, this human will be very happy if I tell them the following, they will do it if it brings them more points. And they don't have to keep track of it. It's just the right answer to this problem every single time.

Lex Fridman Podcast
#431 – Roman Yampolskiy: Dangers of Superintelligent AI

Well, some people think that if they're that smart, they're always good. They really do believe that: benevolence follows from intelligence, so they'll always want what's best for us. Some people think that they will be able to detect problem behaviors and correct them when we get there. I don't think it's a good idea. I am strongly against it.

Lex Fridman Podcast
#431 – Roman Yampolskiy: Dangers of Superintelligent AI

But yeah, there are quite a few people who, in general, are so optimistic about this technology, it could do no wrong. They want it developed as soon as possible, as capable as possible.

Lex Fridman Podcast
#431 – Roman Yampolskiy: Dangers of Superintelligent AI

There are even people who say, okay, what's so special about humans, right? We removed gender bias. We're removing race bias. Why this pro-human bias? We are polluting the planet. We, as you said, fight a lot of wars, are kind of violent. Maybe it's better if a superintelligent, perfect society comes and replaces us. It's a normal stage in the evolution of our species.
