Roman Yampolsky
Podcast Appearances
What do people do with all that free time? What happens then? Everything society is built on is completely modified in one generation. It's not a slow process where we get to kind of figure out how to live that new lifestyle, but it's pretty quick.
It's an option. I have a paper where I try to solve the value alignment problem for multiple agents. And the solution to avoid compromise is to give everyone a personal virtual universe. You can do whatever you want in that world. You could be a king, you could be a slave; you decide what happens.
So it's basically a glorified video game where you get to enjoy yourself and someone else takes care of your needs and the substrate alignment is the only thing we need to solve. We don't have to get 8 billion humans to agree on anything.
Some people say that's what happened. We're in a simulation.
And some people choose to play on a more difficult level with more constraints. Some say, okay, I'm just going to enjoy the game, high privilege level. Absolutely.
Personal universes.
In order to solve the value alignment problem, I'm trying to formalize it a little better. Usually we're talking about getting AIs to do what we want, which is not well-defined. Are we talking about the creator of the system, the owner of that AI, or humanity as a whole? We don't agree on much. There is no universally accepted ethics or morals across cultures and religions.