Roman Yampolskiy

771 total appearances

Podcast Appearances

Lex Fridman Podcast
#431 – Roman Yampolskiy: Dangers of Superintelligent AI

People have individually very different preferences, politically and such. So even if we somehow managed all the other aspects of it, programming those fuzzy concepts in, getting AI to follow them closely, we don't agree on what to program in. So my solution was: okay, we don't have to compromise on room temperature. You have your universe, I have mine, whatever you want.

Lex Fridman Podcast
#431 – Roman Yampolskiy: Dangers of Superintelligent AI

And if you like me, you can invite me to visit your universe. We don't have to be independent, but the point is you can be. And virtual reality is getting pretty good. It's going to hit a point where you can't tell the difference. And if you can't tell if it's real or not, what's the difference?

Lex Fridman Podcast
#431 – Roman Yampolskiy: Dangers of Superintelligent AI

You still have to align with that individual. They have to be happy in that simulation. But it's a much easier problem to align with one agent versus 8 billion agents plus animals, aliens.

Lex Fridman Podcast
#431 – Roman Yampolskiy: Dangers of Superintelligent AI

I'm trying to do that, yeah.

Lex Fridman Podcast
#431 – Roman Yampolskiy: Dangers of Superintelligent AI

It seems contradictory. I haven't seen anyone explain what it means outside of kind of words which pack a lot: make it good, make it desirable, make it something they don't regret. But how do you specifically formalize those notions? How do you program them in? I haven't seen anyone make progress on that so far.

Lex Fridman Podcast
#431 – Roman Yampolskiy: Dangers of Superintelligent AI

Right. But the examples you gave, some of them are, for example, two different religions saying this is our holy site and we are not willing to compromise it in any way. If you can make two holy sites in virtual worlds, you solve the problem. But if you only have one, it's not divisible. You're kind of stuck there.

Lex Fridman Podcast
#431 – Roman Yampolskiy: Dangers of Superintelligent AI

If we go back to that idea of simulation and this is entertainment kind of giving meaning to us, the question is how much suffering is reasonable for a video game? So yeah, I don't mind a video game where I get haptic feedback, there is a little bit of shaking, maybe I'm a little scared. I don't want a game where kids are tortured, literally. That seems unethical, at least by our human standards.
