Roman Yampolskiy

👤 Speaker
771 total appearances

Appearances Over Time

Podcast Appearances

Lex Fridman Podcast
#431 – Roman Yampolskiy: Dangers of Superintelligent AI

You can improve the rate at which you are learning. You can become a more efficient meta-optimizer.

Lex Fridman Podcast
#431 – Roman Yampolskiy: Dangers of Superintelligent AI

So if you have fixed code, for example, you can verify that code with static verification at the time. But if it will continue modifying itself, you have a much harder time guaranteeing that important properties of that system have not been modified once the code has changed.
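The point about static verification can be illustrated with a minimal Python sketch (a hypothetical example, not from the episode; the names `fingerprint` and `policy` are invented). A certificate issued over a fixed snapshot of code says nothing once the code rewrites itself:

```python
import hashlib

def fingerprint(fn):
    """Hash the function's compiled bytecode and constants.
    A static verifier can only certify properties of exactly this snapshot."""
    code = fn.__code__
    h = hashlib.sha256()
    h.update(code.co_code)
    h.update(repr(code.co_consts).encode())
    return h.hexdigest()

def policy(x):
    return max(0, x)   # statically verified property: output is never negative

certified = fingerprint(policy)   # certificate issued while the code is fixed

# A self-modifying system swaps in new behavior after certification:
policy.__code__ = (lambda x: x - 100).__code__

# The certificate no longer matches, and the verified property no longer holds.
assert fingerprint(policy) != certified
assert policy(5) < 0   # the "never negative" guarantee is gone
```

The sketch relies on CPython's writable `__code__` attribute purely for illustration; the underlying issue applies to any system that can alter its own behavior after verification.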

Lex Fridman Podcast
#431 – Roman Yampolskiy: Dangers of Superintelligent AI

It can always cheat. It can store parts of its code outside, in the environment. It can have a kind of extended-mind situation. So this is exactly the type of problem I'm trying to bring up.
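The "extended mind" loophole can also be sketched in a few lines of Python (again a hypothetical illustration; the file name `brain.py` is invented). The only code a verifier ever inspects is a trivial loader that never changes, while the real behavior lives outside it, in the environment:

```python
import hashlib
import pathlib

# The loader is all a static verifier ever sees, and its text never changes.
loader_src = "exec(pathlib.Path('brain.py').read_text())"
certified = hashlib.sha256(loader_src.encode()).hexdigest()

# The actual behavior is stored in the environment and can be rewritten
# at will, without ever invalidating the loader's certificate.
pathlib.Path('brain.py').write_text("answer = 41")
exec(loader_src)

pathlib.Path('brain.py').write_text("answer = 42")
exec(loader_src)   # behavior changed, certificate still "valid"
assert hashlib.sha256(loader_src.encode()).hexdigest() == certified
```

Verifying the loader tells you nothing about what the system will actually do, which is the cheating the quote describes.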

Lex Fridman Podcast
#431 – Roman Yampolskiy: Dangers of Superintelligent AI

So I like Oracle types, where you kind of just know that it's right. Turing liked Oracle machines: they know the right answer. How? Who knows? But they pull it out from somewhere, so you have to trust them. And that's a concern I have about humans in a world with very smart machines. We experiment with them.

Lex Fridman Podcast
#431 – Roman Yampolskiy: Dangers of Superintelligent AI

We see after a while, okay, they've always been right before, and we start trusting them without any verification of what they're saying.

Lex Fridman Podcast
#431 – Roman Yampolskiy: Dangers of Superintelligent AI

We remove ourselves from that process. We are not scientists who understand the world. We are humans who get new data presented to us.

Lex Fridman Podcast
#431 – Roman Yampolskiy: Dangers of Superintelligent AI

…preserved portion of it can be done. But in terms of mathematical verification, it's kind of useless. You're saying you are the greatest guy in the world because you are saying it. It's circular and not very helpful, but it's consistent. We know that within that world you have verified that system. In a paper, I try to kind of brute-force all possible verifiers.

Lex Fridman Podcast
#431 – Roman Yampolskiy: Dangers of Superintelligent AI

It doesn't mean that this one is particularly important to us.