
Lex Fridman Podcast

#431 – Roman Yampolskiy: Dangers of Superintelligent AI

Sun, 02 Jun 2024

Description

Roman Yampolskiy is an AI safety researcher and author of a new book titled AI: Unexplainable, Unpredictable, Uncontrollable.

Please support this podcast by checking out our sponsors:
- Yahoo Finance: https://yahoofinance.com
- MasterClass: https://masterclass.com/lexpod to get 15% off
- NetSuite: http://netsuite.com/lex to get free product tour
- LMNT: https://drinkLMNT.com/lex to get free sample pack
- Eight Sleep: https://eightsleep.com/lex to get $350 off

Transcript: https://lexfridman.com/roman-yampolskiy-transcript

EPISODE LINKS:
Roman's X: https://twitter.com/romanyam
Roman's Website: http://cecs.louisville.edu/ry
Roman's AI book: https://amzn.to/4aFZuPb

PODCAST INFO:
Podcast website: https://lexfridman.com/podcast
Apple Podcasts: https://apple.co/2lwqZIr
Spotify: https://spoti.fi/2nEwCF8
RSS: https://lexfridman.com/feed/podcast/
YouTube Full Episodes: https://youtube.com/lexfridman
YouTube Clips: https://youtube.com/lexclips

SUPPORT & CONNECT:
- Check out the sponsors above, it's the best way to support this podcast
- Support on Patreon: https://www.patreon.com/lexfridman
- Twitter: https://twitter.com/lexfridman
- Instagram: https://www.instagram.com/lexfridman
- LinkedIn: https://www.linkedin.com/in/lexfridman
- Facebook: https://www.facebook.com/lexfridman
- Medium: https://medium.com/@lexfridman

OUTLINE:
Here's the timestamps for the episode. On some podcast players you should be able to click the timestamp to jump to that time.
(00:00) - Introduction
(09:12) - Existential risk of AGI
(15:25) - Ikigai risk
(23:37) - Suffering risk
(27:12) - Timeline to AGI
(31:44) - AGI turing test
(37:06) - Yann LeCun and open source AI
(49:58) - AI control
(52:26) - Social engineering
(54:59) - Fearmongering
(1:04:49) - AI deception
(1:11:23) - Verification
(1:18:22) - Self-improving AI
(1:30:34) - Pausing AI development
(1:36:51) - AI Safety
(1:46:35) - Current AI
(1:51:58) - Simulation
(1:59:16) - Aliens
(2:00:50) - Human mind
(2:07:10) - Neuralink
(2:16:15) - Hope for the future
(2:20:11) - Meaning of life

Transcription

0.049 - 25.981 Lex Fridman

The following is a conversation with Roman Yampolskiy, an AI safety and security researcher and author of a new book titled AI: Unexplainable, Unpredictable, Uncontrollable. He argues that there's almost a 100% chance that AGI will eventually destroy human civilization. As an aside, let me say that I will have many, often technical conversations on the topic of AI.

27.023 - 48.409 Lex Fridman

often with engineers building the state-of-the-art AI systems. I would say those folks put the infamous p(doom), or the probability of AGI killing all humans, at around 1 to 20%. But it's also important to talk to folks who put that value at 70, 80, 90, and, in the case of Roman, at 99.99 and many more nines percent.

52 - 79.118 Lex Fridman

I'm personally excited for the future and believe it will be a good one, in part because of the amazing technological innovation we humans create. But we must absolutely not do so with blinders on, ignoring the possible risks, including existential risks of those technologies. That's what this conversation is about. And now, a quick few second mention of each sponsor.

79.658 - 104.806 Lex Fridman

Check them out in the description. It's the best way to support this podcast. We got Yahoo Finance for investors, MasterClass for learning, NetSuite for business, Element for hydration, and Eight Sleep for sweet, sweet naps. Choose wisely, my friends. Also, if you want to get in touch with me, or for whatever reason, work with our amazing team, let's say, just go to lexfridman.com slash contact.

105.586 - 130.734 Lex Fridman

And now onto the full ad reads. As always, no ads in the middle. I try to make these interesting, but if you must skip them, friends, please still check out our sponsors. I enjoy their stuff. Maybe you will too. This episode is brought to you by Yahoo Finance, a site that provides financial management, reports, information, and news for investors. It's my main go-to place for financial stuff.

130.754 - 158.815 Lex Fridman

I also added my portfolio to it. I guess it used to be TD Ameritrade, and then I got transported, transformed, moved to Charles Schwab. I guess that was an acquisition of some sort. I have not been paying attention. All I know is I hate change and trying to figure out the new interface of Schwab when I log in once a year or however long I log in is just annoying.

159.755 - 185.356 Lex Fridman

Anyway, one of the ways to avoid that annoyance is tracking information about my portfolio from Yahoo Finance. So you can drag over your portfolio and in the same place find out all the news, analysis, information, all that kind of stuff. Anyway, for comprehensive financial news and analysis, go to YahooFinance.com. That's YahooFinance.com. I don't know why I whispered that.

186.017 - 212.721 Lex Fridman

This episode is also brought to you by MasterClass, where you can watch over 180 classes from the best people in the world in their respective disciplines. We got Aaron Franklin on barbecue and brisket, something I watched recently, and I love brisket. I love barbecue. It's one of my favorite things about Austin. It's funny when the obvious cliche thing is also the thing that brings you joy.

213.702 - 239.227 Lex Fridman

So it almost doesn't feel genuine to say, but I really love barbecue. My favorite place to go is probably Terry Black's. I've had Franklin's a couple times. It's also amazing. I actually don't remember myself having bad barbecue or even mediocre barbecue in Austin. So it's hard to pick favorites because it really boils down to the experience you have when you're sitting there.

239.247 - 260.347 Lex Fridman

One of my favorite places to sit is Terry Black's. They have this, I don't know, it feels like a tavern. I feel like a cowboy. Like I just robbed a bank in some town in the middle of nowhere in West Texas. And I'm just sitting down for some good barbecue. And the sheriffs walk in and there's a gunfight and all that, as usual.

260.967 - 284.98 Lex Fridman

Anyway, get unlimited access to every Masterclass and get an additional 15% off an annual membership at masterclass.com. That's masterclass.com. This episode is also brought to you by NetSuite, an all-in-one cloud business management system. One of the most fulfilling things in life is the people you surround yourself with. Just like in the movie 300.

285.2 - 312.503 Lex Fridman

All it takes is 300 people to do some incredible stuff. But they all have to be shredded. It's really, really important to look good with your, no. It's really, really important to always be ready for war in physical and mental shape. No, not really, but I guess if that's your thing, happiness is the thing you should be chasing, and there's a lot of ways to achieve that.

313.103 - 340.408 Lex Fridman

For me, being in shape is one of the things that make me happy, because I can move about the world and have a lightness to my physical being if I'm in good shape. Anyway, I say all that because getting a strong team together and having them operate as an efficient, powerful machine is really important for the success of the team, for the happiness of the team, and the individuals in that team.

341.148 - 360.212 Lex Fridman

NetSuite is a great system that runs the machine inside the machine for any size business. 37,000 companies have upgraded to NetSuite by Oracle. Take advantage of NetSuite's flexible financing plan at netsuite.com slash lex. That's netsuite.com slash lex.

361.92 - 385.005 Lex Fridman

This episode was also brought to you by Element, electrolyte drink mix of sodium, potassium, and magnesium that I've been consuming multiple times a day. Watermelon salt is my favorite. Whenever you see me drink from a cup on the podcast, almost always it's going to be water with some element in it. I use an empty Powerade bottle, 28 fluid ounces, fill it with water.

385.565 - 406.499 Lex Fridman

put one packet of watermelon salt element in it, mix it up, put it in the fridge. And then when it's time to drink, I take it out of the fridge and I drink it. And I drink a lot of those a day and it feels good. It's delicious. Whenever I do crazy physical fasting, all that kind of stuff, element is always by my side.

406.519 - 423.527 Lex Fridman

And then more and more, you're going to see probably the sparkling water thing or whatever that element is making. So it's in a can and it's freaking delicious. There's four flavors. The lemon one is the only one I don't like. The other three I really love, and I forget their names, but they're freaking delicious.

424.047 - 441.597 Lex Fridman

And you're going to see it more and more on my desk, except for the fact that I run out very quickly because I consume them very quickly. Get a sample pack for free with any purchase. Try it at drinkLMNT.com/lex. This episode is also brought to you by Eight Sleep and its Pod 4 Ultra. This thing is amazing.

441.857 - 464.544 Lex Fridman

The ultra part of that adds a base that goes between the mattress and the bed frame and can elevate to like a reading position. So it modifies the positioning of the bed frame. So on top of all the cooling and heating and all that kind of stuff they can do and do it better in the Pod 4, I think it has 2x the cooling power of Pod 3. So they're improving on the main thing that they do.

464.744 - 491.537 Lex Fridman

But also there's the ultra part that can adjust the bed height. It can cool down each side of the bed to 20 degrees Fahrenheit below room temperature. One of my favorite things is to escape the world on a cool bed with a warm blanket and just disappear for 20 minutes or for 8 hours into a dream world where everything is possible where everything is allowed.

491.557 - 518.403 Lex Fridman

It's a chance to explore the Jungian shadow. The good, the bad, and the ugly. But it's usually good. It's usually awesome. And I actually don't dream that much, but when I do, it's awesome. The whole point, though, is that I wake up refreshed. Taking your sleep seriously is really, really important. When you get a chance to sleep, do it in style. And do it on a bed that's awesome.

519.603 - 558.506 Lex Fridman

Go to eightsleep.com slash lex and use code Lex to get $350 off the Pod 4 Ultra. This is the Lex Fridman Podcast. To support it, please check out our sponsors in the description. And now, dear friends, here's Roman Yampolskiy. What to you is the probability that superintelligent AI will destroy all human civilization?

558.826 - 559.667 Roman Yampolsky

What's the timeframe?

560.147 - 562.068 Lex Fridman

Let's say 100 years, in the next 100 years.

562.568 - 593.495 Roman Yampolsky

So the problem of controlling AGI or superintelligence, in my opinion, is like a problem of creating a perpetual safety machine. By analogy with perpetual motion machine, it's impossible. Yeah, we may succeed and do a good job with GPT-5, 6, 7, but they just keep improving, learning, eventually self-modifying, interacting with the environment, interacting with malevolent actors.

594.98 - 615.259 Roman Yampolsky

The difference between cybersecurity, narrow AI safety, and safety for general AI for superintelligence is that we don't get a second chance. With cybersecurity, somebody hacks your account, what's the big deal? You get a new password, new credit card, you move on. Here, if we're talking about existential risks, you only get one chance.

615.879 - 628.795 Roman Yampolsky

So you're really asking me, what are the chances that we'll create the most complex software ever on the first try with zero bugs, and it will continue to have zero bugs for 100 years or more?

631.089 - 647.981 Lex Fridman

So there is an incremental improvement of systems leading up to AGI. To you, it doesn't matter if we can keep those safe. There's going to be one level of system at which you cannot possibly control it.

649.823 - 672.605 Roman Yampolsky

I don't think we so far have made any system safe. At the level of capability they display, they already have made mistakes. We had accidents. They've been jailbroken. I don't think there is a single large language model today which no one was successful at making do something developers didn't intend it to do.

673.689 - 690.75 Lex Fridman

But there's a difference between getting it to do something unintended, getting it to do something that's painful, costly, destructive, and something that's destructive to the level of hurting billions of people, or hundreds of millions of people, billions of people, or the entirety of human civilization. That's a big leap.

691.784 - 707.553 Roman Yampolsky

Exactly. But the systems we have today have capability of causing X amount of damage. So when they fail, that's all we get. If we develop systems capable of impacting all of humanity, all of universe, the damage is proportionate.

708.55 - 715.475 Lex Fridman

What to you are the possible ways that such kind of mass murder of humans can happen?

716.295 - 737.027 Roman Yampolsky

That's obviously a wonderful question. So one of the chapters in my new book is about unpredictability. I argue that we cannot predict what a smarter system will do. So you're really not asking me how superintelligence will kill everyone. You're asking me how I would do it. And I think it's not that interesting. I can tell you about the standard nanotech, synthetic bio, nuclear.

737.908 - 747.957 Roman Yampolsky

Superintelligence will come up with something completely new, completely super. We may not even recognize that as a possible path to achieve that goal.

748.677 - 773.069 Lex Fridman

So there's like an unlimited level of creativity in terms of how humans could be killed. But, you know, we could still investigate possible ways of doing it, not how to do it, but at the end, what is the methodology that does it? You know, shutting off the power, and then humans start killing each other maybe because the resources are really constrained.

773.73 - 792.476 Lex Fridman

And then there's the actual use of weapons like nuclear weapons or developing artificial pathogens, viruses, that kind of stuff. We could still kind of think through that and defend against it, right? There's a ceiling to the creativity of mass murder of humans here, right? The options are limited.

793.57 - 814.717 Roman Yampolsky

They are limited by how imaginative we are. If you are that much smarter, that much more creative, you are capable of thinking across multiple domains, do novel research in physics and biology, you may not be limited by those tools. If squirrels were planning to kill humans, they would have a set of possible ways of doing it, but they would never consider things we can come up with.

814.757 - 835.883 Lex Fridman

So are you thinking about mass murder and destruction of human civilization, or are you thinking of, with squirrels, you put them in a zoo, and they don't really know they're in a zoo? If we just look at the entire set of undesirable trajectories, the majority of them are not going to be death. Most of them are going to be just like, things like Brave New World, where

837.338 - 863.209 Lex Fridman

you know, the squirrels are fed dopamine and they're all like doing some kind of fun activity and the sort of the fire, the soul of humanity is lost because of the drug that's fed to it. Or like literally in a zoo. We're in a zoo, we're doing our thing. We're like playing a game of Sims and the actual players playing that game are AI systems. Those are all undesirable because sort of the free will

864.605 - 877.39 Lex Fridman

the fire of human consciousness is dimmed through that process, but it's not killing humans. So are you thinking about that, or is the biggest concern literally the extinction of humans?

877.95 - 902.179 Roman Yampolsky

I think about a lot of things. So there is X risk, existential risk, everyone's dead. There is S risk, suffering risks, where everyone wishes they were dead. We have also idea for I risk, ikigai risks, where we lost our meaning. The systems can be more creative. They can do all the jobs. It's not obvious what you have to contribute to a world where superintelligence exists.

902.62 - 925.085 Roman Yampolsky

Of course, you can have all the variants you mentioned where we are safe, we are kept alive, but we are not in control. We are not deciding anything. We are like animals in a zoo. Possibilities we can come up with as very smart humans, and then possibilities something a thousand times smarter can come up with for reasons we cannot comprehend.

925.782 - 934.049 Lex Fridman

I would love to sort of dig into each of those, X-risk, S-risk, and I-risk. So can you like linger on I-risk? What is that?

934.589 - 961.548 Roman Yampolsky

So Japanese concept of ikigai, you find something which allows you to make money, you are good at it, and the society says we need it. So like you have this awesome job, you are a podcaster, gives you a lot of meaning, you have a good life, I assume you're happy. That's what we want most people to find, to have. For many intellectuals, it is their occupation which gives them a lot of meaning.

962.208 - 989.95 Roman Yampolsky

I am a researcher, philosopher, scholar. That means something to me. In a world where an artist is not feeling appreciated because his art is just not competitive with what is produced by machines, or a writer or scientist will lose a lot of that. And at the lower level, we're talking about complete technological unemployment. We're not losing 10% of jobs, we're losing all jobs.

990.591 - 1008.087 Roman Yampolsky

What do people do with all that free time? What happens then? Everything society is built on is completely modified in one generation. It's not a slow process where we get to kind of figure out how to live that new lifestyle, but it's pretty quick.

1008.737 - 1036.529 Lex Fridman

In that world, can't humans do what humans currently do with chess, play each other, have tournaments, even though AI systems are far superior at this time in chess. So we just create artificial games. Or for us, they're real, like the Olympics. We do all kinds of different competitions and have fun. Maximize the fun and let the AI focus on the productivity.

1037.17 - 1052.949 Roman Yampolsky

It's an option. I have a paper where I try to solve the value alignment problem for multiple agents. And the solution to avoid compromise is to give everyone a personal virtual universe. You can do whatever you want in that world. You could be king, you could be slave, you decide what happens.

1053.33 - 1067.643 Roman Yampolsky

So it's basically a glorified video game where you get to enjoy yourself and someone else takes care of your needs and the substrate alignment is the only thing we need to solve. We don't have to get 8 billion humans to agree on anything.

1068.652 - 1080.438 Lex Fridman

So, okay, so why is that not a likely outcome? Why can't AI systems create video games for us to lose ourselves in, each with an individual video game universe?

1081.039 - 1083.62 Roman Yampolsky

Some people say that's what happened. We're in a simulation.

1084.521 - 1097.988 Lex Fridman

And we're playing that video game. And now we're creating what? Maybe we're creating artificial threats for ourselves to be scared about because fear is really exciting. It allows us to play the video game more vigorously.

1098.582 - 1107.331 Roman Yampolsky

And some people choose to play on a more difficult level with more constraints. Some say, okay, I'm just going to enjoy the game, high privilege level. Absolutely.

1107.972 - 1110.855 Lex Fridman

Okay, what was that paper on multi-agent value alignment?

1111.215 - 1113.938 Roman Yampolsky

Personal universes. Personal universes.

1115.507 - 1128.174 Lex Fridman

So that's one of the possible outcomes. But what in general is the idea of the paper? So it's looking at multiple agents that are human, AI, like a hybrid system where there's humans and AIs, or is it looking at humans or just intelligent agents?

1128.394 - 1151.253 Roman Yampolsky

In order to solve value alignment problem, I'm trying to formalize it a little better. Usually, we're talking about getting AIs to do what we want, which is not well-defined. We're talking about creator of the system, owner of that AI, humanity as a whole, but we don't agree on much. There is no universally accepted ethics, morals across cultures, religions.

1151.913 - 1172.483 Roman Yampolsky

People have individually very different preferences politically and such. So even if we somehow managed all the other aspects of it, programming those fuzzy concepts in, getting AI to follow them closely, we don't agree on what to program in. So my solution was, okay, we don't have to compromise on room temperature. You have your universe, I have mine, whatever you want.

1173.263 - 1187.335 Roman Yampolsky

And if you like me, you can invite me to visit your universe. We don't have to be independent, but the point is you can be. And virtual reality is getting pretty good. It's going to hit a point where you can't tell the difference. And if you can't tell if it's real or not, what's the difference?

1187.591 - 1196.837 Lex Fridman

So basically, give up on value alignment. Create an entire, it's like the multiverse theory. It's just create an entire universe for you with your values.

1197.277 - 1207.684 Roman Yampolsky

You still have to align with that individual. They have to be happy in that simulation. But it's a much easier problem to align with one agent versus 8 billion agents plus animals, aliens.

1208.305 - 1211.748 Lex Fridman

So you convert the multi-agent problem into a single agent problem.

1212.369 - 1213.95 Roman Yampolsky

I'm trying to do that, yeah.

1214.23 - 1233.908 Lex Fridman

Okay. Is there any way to, so, okay, that's giving up on the value alignment problem. Well, is there any way to solve the value alignment problem where there's a bunch of humans, multiple humans, tens of humans, or 8 billion humans that have very different set of values? Yes.

1234.486 - 1255.25 Roman Yampolsky

It seems contradictory. I haven't seen anyone explain what it means outside of kind of words which pack a lot, make it good, make it desirable, make it something they don't regret. But how do you specifically formalize those notions? How do you program them in? I haven't seen anyone make progress on that so far.

1255.717 - 1280.741 Lex Fridman

But isn't that the whole optimization journey that we're doing as a human civilization? We're looking at geopolitics. Nations are in a state of anarchy with each other. They start wars, there's conflict, and oftentimes they have very different views of what is good and what is evil. Isn't that what we're trying to figure out? Just together, trying to converge towards that?

1280.821 - 1283.862 Lex Fridman

So we're essentially trying to solve the value alignment problem with humans.

1284.369 - 1301.981 Roman Yampolsky

Right. But the examples you gave, some of them are, for example, two different religions saying this is our holy site and we are not willing to compromise it in any way. If you can make two holy sites in virtual worlds, you solve the problem. But if you only have one, it's not divisible. You're kind of stuck there.

1302.602 - 1320.813 Lex Fridman

But what if we want to be at tension with each other? And through that tension, we understand ourselves and we understand the world. So that's the intellectual journey we're on as a human civilization, is we create intellectual and physical conflict and through that figure stuff out.

1321.423 - 1346.461 Roman Yampolsky

If we go back to that idea of simulation and this is entertainment kind of giving meaning to us, the question is how much suffering is reasonable for a video game? So yeah, I don't mind a video game where I get haptic feedback, there is a little bit of shaking, maybe I'm a little scared. I don't want a game where kids are tortured, literally. That seems unethical, at least by our human standards.

1347.418 - 1352.485 Lex Fridman

Are you suggesting it's possible to remove suffering if we're looking at human civilization as an optimization problem?

1353.487 - 1378.069 Roman Yampolsky

So we know there are some humans who, because of a mutation, don't experience physical pain. So at least physical pain can be mutated out, re-engineered out. Suffering in terms of meaning, like you burn the only copy of my book, is a little harder. But even there, you can manipulate your hedonic set point, you can change defaults, you can reset.

1378.43 - 1387.636 Roman Yampolsky

Problem with that is if you start messing with your reward channel, you start wireheading and end up blissing out a little too much.

1388.271 - 1401.421 Lex Fridman

Well, that's the question. Would you really want to live in a world where there's no suffering? That's a dark question. But is there some level of suffering that reminds us of what this is all for?

1401.441 - 1415.611 Roman Yampolsky

I think we need that, but I would change the overall range. So right now it's negative infinity to kind of positive infinity, pain-pleasure axis. I would make it like zero to positive infinity. And being unhappy is like, I'm close to zero.

1416.812 - 1427.216 Lex Fridman

Okay, so what's the S risk? What are the possible things that you're imagining with S risk? So mass suffering of humans. What are we talking about there caused by AGI?

1427.557 - 1447.507 Roman Yampolsky

So there are many malevolent actors. We can talk about psychopaths, crazies, hackers, doomsday cults. We know from history they tried killing everyone. They tried on purpose to cause maximum amount of damage, terrorism. What if someone malevolent wants on purpose to torture all humans as long as possible?

1448.248 - 1456.393 Roman Yampolsky

You solve aging, so now you have functional immortality, and you just try to be as creative as you can.

1457.332 - 1479.209 Lex Fridman

Do you think there are actually people in human history that try to literally maximize human suffering? It's just studying people who have done evil in the world. It seems that they think that they're doing good. And it doesn't seem like they're trying to maximize suffering. They just cause a lot of suffering as a side effect of doing what they think is good.

1479.793 - 1500.52 Roman Yampolsky

So there are different malevolent agents. Some maybe just gaining personal benefit and sacrificing others to that cause. Others, we know for a fact, are trying to kill as many people as possible. And we look at recent school shootings. If they had more capable weapons, they would take out not dozens, but thousands, millions, billions.

1506.503 - 1528.651 Lex Fridman

Well, we don't know that. But that is a terrifying possibility. And we don't want to find out. Like if terrorists had access to nuclear weapons, how far would they go? Is there a limit to what they're willing to do? In your sense, is there some malevolent actors where there's no limit?

1529.251 - 1541.629 Roman Yampolsky

There are mental diseases where people don't have empathy, don't have this human quality of understanding suffering in others.

1542.45 - 1548.273 Lex Fridman

And then there's also a set of beliefs where you think you're doing good by killing a lot of humans.

1549.994 - 1555.697 Roman Yampolsky

Again, I would like to assume that normal people never think like that. It's always some sort of psychopaths, but yeah.

1556.818 - 1563.342 Lex Fridman

And to you, AGI systems can carry that and be more competent at executing that.

1564.474 - 1580.498 Roman Yampolsky

They can certainly be more creative. They can understand human biology better, understand our molecular structure, genome. Again, a lot of times torture ends and the individual dies. That limit can be removed as well.

1581.109 - 1594.982 Lex Fridman

So if we're actually looking at X risk and S risk, as the systems get more and more intelligent, don't you think it's possible to anticipate the ways they can do it and defend against it like we do with the cybersecurity, with the new security systems?

1595.871 - 1612.924 Roman Yampolsky

Right. We can definitely keep up for a while. I'm saying you cannot do it indefinitely. At some point, the cognitive gap is too big. The surface you have to defend is infinite. But attackers only need to find one exploit.

1613.824 - 1617.007 Lex Fridman

So to you, eventually, this is heading off a cliff.

1617.93 - 1626.054 Roman Yampolsky

If we create general super intelligences, I don't see a good outcome long-term for humanity. The only way to win this game is not to play it.

1626.614 - 1639.86 Lex Fridman

Okay, well, we'll talk about possible solutions and what not playing it means. But what are the possible timelines here to you? What are we talking about? We're talking about a set of years, decades, centuries. What do you think?

1640.251 - 1663.932 Roman Yampolsky

I don't know for sure. The prediction markets right now are saying 2026 for AGI. I heard the same thing from CEO of Anthropic, DeepMind, so maybe we are two years away, which seems very soon given we don't have a working safety mechanism in place or even a prototype for one. And there are people trying to accelerate those timelines because they feel we're not getting there quick enough.

1664.397 - 1667.198 Lex Fridman

But what do you think they mean when they say AGI?

1667.798 - 1687.384 Roman Yampolsky

So the definitions we used to have, and people are modifying them a little bit lately. Artificial general intelligence was a system capable of performing in any domain a human could perform. So kind of you're creating this average artificial person. They can do cognitive labor, physical labor, where you can get another human to do it.

1687.824 - 1710.905 Roman Yampolsky

Superintelligence was defined as a system which is superior to all humans in all domains. Now people are starting to refer to AGI as if it's superintelligence. I made a post recently where I argued, for me at least, if you average out over all the common human tasks, those systems are already smarter than an average human. So under that definition, we have it.

1711.545 - 1726.57 Roman Yampolsky

Shane Legg has this definition of where you're trying to win in all domains. That's what intelligence is. Now, are they smarter than elite individuals in certain domains? Of course not. They're not there yet. But the progress is exponential.

1727.01 - 1755.58 Lex Fridman

See, I'm much more concerned about social engineering. To me, AI's ability to do something in the physical world, like the lowest hanging fruit, the easiest set of methods is by just getting humans to do it. It's going to be much harder to be the kind of viruses that take over the minds of robots, where the robots are executing the commands.

1755.72 - 1759.281 Lex Fridman

It just seems like humans, social engineering of humans is much more likely.

1759.781 - 1761.922 Roman Yampolsky

That would be enough to bootstrap the whole process.

1763.643 - 1771.405 Lex Fridman

Okay, just to linger on the term AGI, what to you is the difference between AGI and human level intelligence?

1771.505 - 1792.752 Roman Yampolsky

Human level is general in the domain of expertise of humans. We know how to do human things. I don't speak dog language. I should be able to pick it up if I'm a general intelligence. It's kind of inferior animal. I should be able to learn that skill, but I can't. A general intelligence, truly universal general intelligence, should be able to do things like that humans cannot do.

1793.476 - 1795.017 Lex Fridman

to be able to talk to animals, for example.

1795.417 - 1807.301 Roman Yampolsky

To solve pattern recognition problems of that type, to do other similar things outside of our domain of expertise, because it's just not the world we live in.

1808.521 - 1829.256 Lex Fridman

If we just look at the space of cognitive abilities we have, I just would love to understand what the limits are beyond which an AGI system can reach. What does that look like? What about actual mathematical thinking or scientific innovation, that kind of stuff.

1830.238 - 1835.045 Roman Yampolsky

We know calculators are smarter than humans in that narrow domain of addition.

1835.933 - 1852.397 Lex Fridman

But is it humans plus tools versus AGI, or just human, raw human intelligence? Because humans create tools, and with the tools, they become more intelligent. So there's a gray area there, what it means to be human when we're measuring their intelligence.

1852.417 - 1859.218 Roman Yampolsky

So when I think about it, I usually think human with a paper and a pencil, not human with internet and other AI helping.

1859.773 - 1866.319 Lex Fridman

But is that a fair way to think about it? Because isn't there another definition of human-level intelligence that includes the tools that humans create?

1866.819 - 1872.744 Roman Yampolsky

But we create AI. So at any point, you'll still just add superintelligence to human capability? That seems like cheating.

1874.477 - 1892.84 Lex Fridman

No, controllable tools. There is an implied leap that you're making when AGI goes from tool to entity. It can make its own decisions. So if we define human-level intelligence as everything a human can do with fully controllable tools.

1893.56 - 1903.162 Roman Yampolsky

It seems like a hybrid of some kind. You're now doing brain-computer interfaces. You're connecting it to maybe narrow AIs. Yeah, it definitely increases our capabilities.

1904.178 - 1921.205 Lex Fridman

So what's a good test to you that measures whether an artificial intelligence system has reached human level intelligence? And what's a good test where it has superseded human level intelligence to reach that land of AGI?

1922.455 - 1942.941 Roman Yampolsky

I am old fashioned. I like Turing test. I have a paper where I equate passing Turing test to solving AI complete problems, because you can encode any questions about any domain into the Turing test. You don't have to talk about how was your day? You can ask anything. And so the system has to be as smart as a human to pass it in a true sense.

1943.321 - 1954.72 Lex Fridman

But then you would extend that to maybe a very long conversation. I think the Alexa Prize was doing that. Basically, can you do a 20 minute, 30 minute conversation with a system?

1955.16 - 1964.625 Roman Yampolsky

It has to be long enough to where you can make some meaningful decisions about capabilities, absolutely. You can brute force very short conversations.

1965.706 - 1975.931 Lex Fridman

So like, literally, what does that look like? Can we construct formally a kind of test that tests for AGI?

1976.829 - 1995.144 Roman Yampolsky

For AGI, it has to be there. I cannot give it a task I can give to a human, and it cannot do it if a human can. For superintelligence, it would be superior on all such tasks, not just average performance. Go learn to drive a car. Go speak Chinese. Play guitar. Okay, great.

1995.404 - 2012.934 Lex Fridman

I guess the follow-on question, is there a test for the kind of AGI that would be susceptible to lead to S-risk or X-risk, susceptible to destroy human civilization? Like, is there a test for that?

2013.547 - 2036.863 Roman Yampolsky

You can develop a test which will give you positives if it lies to you or has those ideas. You cannot develop a test which rules them out. There is always possibility of what Bostrom calls a treacherous turn, where later on a system decides for game theoretic reasons, economic reasons to change its behavior. And we see the same with humans. It's not unique to AI.

2037.284 - 2051.592 Roman Yampolsky

For millennia, we tried developing morals, ethics, religions, lie detector tests, and then employees betray the employers, spouses betray family. It's a pretty standard thing intelligent agents sometimes do.

2052.099 - 2056.861 Lex Fridman

So is it possible to detect when an AI system is lying or deceiving you?

2057.361 - 2074.692 Roman Yampolsky

If you know the truth and it tells you something false, you can detect that, but you cannot know in general every single time. And again, the system you're testing today may not be lying. The system you're testing today may know you are testing it and so is behaving.

2075.313 - 2085.588 Roman Yampolsky

And later on, after it interacts with the environment, interacts with other systems, malevolent agents, learns more, it may start doing those things.

2086.072 - 2094.837 Lex Fridman

So do you think it's possible to develop a system where the creators of the system, the developers, the programmers, don't know that it's deceiving them?

2095.838 - 2122.036 Roman Yampolsky

So systems today don't have long-term planning. That is not our, they can lie today if it optimizes, helps them optimize the reward. If they realize, okay, this human will be very happy if I tell them the following, they will do it if it brings them more points. And they don't have to kind of keep track of it. It's just the right answer to this problem every single time.

2123.057 - 2136.268 Lex Fridman

At which point is somebody creating that? Intentionally, not unintentionally. Intentionally creating an AI system that's doing long-term planning with an objective function as defined by the AI system, not by a human.

2136.689 - 2160.148 Roman Yampolsky

Well, some people think that if they're that smart, they're always good. They really do believe that. It's just benevolence from intelligence. So they'll always want what's best for us. Some people think that they will be able to detect problem behaviors and correct them at the time when we get there. I don't think it's a good idea. I am strongly against it.

2160.228 - 2171.566 Roman Yampolsky

But yeah, there are quite a few people who, in general, are so optimistic about this technology, it could do no wrong. They want it developed as soon as possible, as capable as possible.

2172.327 - 2183.687 Lex Fridman

So there's going to be people... who believe the more intelligent it is, the more benevolent, and so therefore it should be the one that defines the objective function that it's optimizing when it's doing long-term planning.

2183.867 - 2208.464 Roman Yampolsky

There are even people who say, okay, what's so special about humans, right? We removed the gender bias. We're removing race bias. Why is this pro-human bias? We are polluting the planet. We are, as you said, you know, fight a lot of wars, kind of violent. Maybe it's better if a super intelligent, perfect society comes and replaces us. It's normal stage in the evolution of our species.

2209.084 - 2229.868 Lex Fridman

Yeah, so somebody says, let's develop an AI system that removes the violent humans from the world. And then it turns out that all humans have violence in them, or the capacity for violence, and therefore all humans are removed. Yeah, yeah, yeah. Let me ask about Yann LeCun.

2230.268 - 2261.672 Lex Fridman

He's somebody who you've had a few exchanges with, and he's somebody who actively pushes back against this view that AI is going to lead to destruction of human civilization, also known as AI doomerism. So in one example that he tweeted, he said, I do acknowledge risks, but two points. One, open research and open source are the best ways to understand and mitigate the risks.

2262.212 - 2282.445 Lex Fridman

And two, AI is not something that just happens. We build it. We have agency in what it becomes. Hence, we control the risks. We meaning humans. It's not some sort of natural phenomena that we have no control over. So can you make the case that he's right, and can you try to make the case that he's wrong?

2282.946 - 2306.366 Roman Yampolsky

I cannot make a case that he's right. He's wrong in so many ways, it's difficult for me to remember all of them. He's a Facebook buddy, so I have a lot of fun having those little debates with him. So I'm trying to remember the arguments. So one, he says we are not... gifted this intelligence from aliens. We are designing it, we are making decisions about it. That's not true.

2306.907 - 2329.603 Roman Yampolsky

It was true when we had expert systems, symbolic AI, decision trees. Today, you set up parameters for a model and you water this plant. You give it data, you give it compute, and it grows. And after it's finished growing into this alien plant, you start testing it to find out what capabilities it has. And it takes years to figure out, even for existing models.

2329.963 - 2341.948 Roman Yampolsky

If it's trained for six months, it will take you two, three years to figure out basic capabilities of that system. We still discover new capabilities in systems which are already out there. So that's not the case.

2342.308 - 2353.532 Lex Fridman

So just to linger on that, to you, the difference there is that there is some level of emergent intelligence that happens in our current approaches. So stuff that we don't hard code in.

2354.258 - 2367.042 Roman Yampolsky

Absolutely. That's what makes it so successful. Then we had to painstakingly hard code in everything. We didn't have much progress. Now, just spend more money and more compute and it's a lot more capable.

2367.842 - 2381.147 Lex Fridman

And then the question is, when there is emergent intelligent phenomena, what is the ceiling of that? For you, there's no ceiling. For Yann LeCun, I think there's a kind of ceiling that happens that we have full control over.

2381.167 - 2395.872 Lex Fridman

Even if we don't understand the internals of the emergence, how the emergence happens, there's a sense that we have control and an understanding of the approximate ceiling of capability, the limits of the capability.

2396.596 - 2405.406 Roman Yampolsky

Let's say there is a ceiling. It's not guaranteed to be at the level which is competitive with us. It may be greatly superior to ours.

2406.327 - 2413.707 Lex Fridman

So what about his statement that open research and open source are the best ways to understand and mitigate the risks?

2414.387 - 2431.327 Roman Yampolsky

Historically, he's completely right. Open source software is wonderful. It's tested by the community. It's debugged, but we're switching from tools to agents. Now you're giving open source weapons to psychopaths. Do we want to open source nuclear weapons? Biological weapons?

2432.008 - 2442.383 Roman Yampolsky

It's not safe to give technology so powerful to those who may misalign it, even if you are successful at somehow getting it to work in the first place in a friendly manner.

2443.277 - 2464.716 Lex Fridman

But the difference with nuclear weapons, current AI systems are not akin to nuclear weapons. So the idea there is you're open sourcing it at this stage, that you can understand it better. A large number of people can explore the limitation, the capabilities, explore the possible ways to keep it safe, to keep it secure, all that kind of stuff, while it's not at the stage of nuclear weapons.

2465.137 - 2490.605 Lex Fridman

So nuclear weapons, there's no nuclear weapon, and then there's a nuclear weapon. With AI systems, there's a gradual improvement of capability, and you get to perform that improvement incrementally. And so open source allows you to study how things go wrong, study the very process of emergence, study AI safety on those systems when there's not a high level of danger, all that kind of stuff.

2490.999 - 2501.886 Roman Yampolsky

It also sets a very wrong precedent. So we open sourced model one, model two, model three, nothing ever bad happened. So obviously we're gonna do it with model four. It's just gradual improvement.

2502.586 - 2519.657 Lex Fridman

I don't think it always works with the precedent. Like you're not stuck doing it the way you always did. It's just, it sets a precedent of open research and open development such that we get to learn together. And then the first time there's a sign of danger,

2520.938 - 2537.083 Lex Fridman

some dramatic thing happen, not a thing that destroys human civilization, but some dramatic demonstration of capability that can legitimately lead to a lot of damage, then everybody wakes up and says, okay, we need to regulate this. We need to come up with a safety mechanism that stops this.

2538.864 - 2546.546 Lex Fridman

At this time, maybe you can educate me, but I haven't seen any illustration of significant damage done by intelligent AI systems.

2547.182 - 2564.591 Roman Yampolsky

So I have a paper which collects accidents through history of AI, and they always are proportional to capabilities of that system. So if you have tic-tac-toe playing AI, it will fail to properly play and loses the game which it should draw. Trivial. Your spell checker will misspell a word, so on.

2565.751 - 2590.185 Roman Yampolsky

I stopped collecting those because there are just too many examples of AIs failing at what they are capable of. We haven't had... terrible accidents in the sense of billion people get killed. Absolutely true. But in another paper, I argue that those accidents do not actually prevent people from continuing with research. And actually, they kind of serve like vaccines.

2591.306 - 2614.747 Roman Yampolsky

A vaccine makes your body a little bit sick, so you can handle the big disease later much better. It's the same here. People will point out, you know that accident, AI accident we had where 12 people died? Everyone's still here. 12 people is less than smoking kills. It's not a big deal. So we continue. So in a way, it will actually be kind of confirming that it's not that bad.

2615.028 - 2643.682 Lex Fridman

It matters... how the deaths happen, whether it's literally murder by the AI system, then that is a problem. But if it's accidents because of increased reliance on automation, for example, so when airplanes are flying in an automated way, maybe the number of plane crashes increased by 17% or something. And then you're like, okay, do we really want to rely on automation?

2644.042 - 2668.034 Lex Fridman

I think in the case of automation in airplanes, it decreased significantly. Okay, same thing with autonomous vehicles. Like, okay, what are the pros and cons? What are the trade-offs here? And you can have that discussion in an honest way. But I think the kind of things we're talking about here is mass scale pain and suffering caused by AI systems.

2668.194 - 2695.642 Lex Fridman

And I think we need to see illustrations of that in a very small scale to start to understand that this is really damaging. Versus Clippy. Versus a tool that's really useful to a lot of people to do learning, to do summarization of text, to do question and answer, all that kind of stuff. To generate videos. A tool. Fundamentally a tool versus an agent that can do a huge amount of damage.

2696.144 - 2714.115 Roman Yampolsky

So you bring up example of cars. Yes, cars were slowly developed and integrated. If we had no cars, and somebody came around and said, I invented this thing. It's called cars. It's awesome. It kills like 100,000 Americans every year. Let's deploy it. Would we deploy that?

2715.094 - 2737.073 Lex Fridman

There's been fear-mongering about cars for a long time, the transition from horses to cars. There's a really nice channel there I recommend people check out, Pessimist Archive, that documents all the fear-mongering about technology that's happened throughout history. There's definitely been a lot of fear-mongering about cars. There's a transition period there about cars, about how deadly they are.

2737.113 - 2757.455 Lex Fridman

We can try. It took a very long time for cars to proliferate to the degree they have now. And then you could ask serious questions in terms of the miles traveled, the benefit to the economy, the benefit to the quality of life that cars do, versus the number of deaths, 30, 40,000 in the United States. Are we willing to pay that price?

2758.356 - 2780.686 Lex Fridman

I think most people, when they're rationally thinking, policymakers will say yes. We want to decrease it from 40,000 to zero and do everything we can to decrease it. There's all kinds of policies and incentives you can create to decrease the risks with the deployment of this technology, but then you have to weigh the benefits and the risks of the technology.

2780.906 - 2782.507 Lex Fridman

And the same thing would be done with AI.

2783.985 - 2802.911 Roman Yampolsky

You need data. You need to know. But if I'm right and it's unpredictable, unexplainable, uncontrollable, you cannot make this decision, we're gaining $10 trillion of wealth, but we're losing, we don't know how many people. You basically have to perform an experiment on 8 billion humans without their consent.

2803.791 - 2809.893 Roman Yampolsky

And even if they want to give you consent, they can't because they cannot give informed consent. They don't understand those things.

2810.748 - 2828.397 Lex Fridman

Right, that happens when you go from the predictable to the unpredictable very quickly. But it's not obvious to me that AI systems would gain capability so quickly that you won't be able to collect enough data to study the benefits and the risks.

2829.685 - 2851.5 Roman Yampolsky

We're literally doing it. The previous model we learned about after we finished training it, what it was capable of. Let's say we stop GPT-4 training run around human capability, hypothetically. We start training GPT-5, and I have no knowledge of insider training runs or anything. And we start at that point of about human, and we train it for the next nine months.

2852.02 - 2865.169 Roman Yampolsky

Maybe two months in, it becomes super intelligent. We continue training it. At the time when we start testing it, it is already a dangerous system. How dangerous? I have no idea. But neither do the people training it.

2866.17 - 2893.414 Lex Fridman

At the training stage, but then there's a testing stage inside the company. They can start getting intuition about what the system is capable of doing. You're saying that somehow the leap from GPT-4 to GPT-5 can happen... the kind of leap where GPT-4 was controllable and GPT-5 is no longer controllable. And we get no insights from using GPT-4 about the fact that GPT-5 will be uncontrollable.

2894.194 - 2910.984 Lex Fridman

Like that's the situation you're concerned about. Where the leap from N to N plus one would be such that an uncontrollable system is created without any ability for us to anticipate that.

2911.613 - 2926.363 Roman Yampolsky

If we had capability of ahead of the run, before the training run, to register exactly what capabilities that next model will have at the end of the training run, and we accurately guessed all of them, I would say, you're right, we can definitely go ahead with this run. We don't have that capability.

2927.164 - 2948.071 Lex Fridman

From GPT-4, you can build up intuitions about what GPT-5 will be capable of. It's just incremental progress. even if that's a big leap in capability, it just doesn't seem like you can take a leap from a system that's helping you write emails to a system that's going to destroy human civilization.

2948.571 - 2977.112 Lex Fridman

It seems like it's always going to be sufficiently incremental such that we can anticipate the possible dangers. And we're not even talking about existential risk, but just the kind of damage it can do to civilization. it seems like we'll be able to anticipate the kinds, not the exact, but the kinds of risks it might lead to, and then rapidly develop defenses ahead of time and as the risks emerge.

2977.85 - 2997.964 Roman Yampolsky

We're not talking just about capabilities, specific tasks. We're talking about general capability to learn. Maybe like a child at the time of testing and deployment, it is still not extremely capable, but as it is exposed to more data, real world, it can be trained to become much more dangerous and capable.

2998.725 - 3011.595 Lex Fridman

Let's focus then on the control problem. At which point does the system become uncontrollable? Why is it the more likely trajectory for you that the system becomes uncontrollable?

3012.577 - 3036.367 Roman Yampolsky

So I think at some point it becomes capable of getting out of control. For game theoretic reasons, it may decide not to do anything right away and for a long time just collect more resources, accumulate strategic advantage. Right away, it may be kind of still young, weak superintelligence. Give it a decade, it's in charge of a lot more resources. It had time to make backups.

3036.848 - 3039.809 Roman Yampolsky

So it's not obvious to me that it will strike as soon as it can.

3041.465 - 3072.464 Lex Fridman

Can we just try to imagine this future where there's an AI system that's capable of escaping the control of humans and then doesn't and waits? What's that look like? So one, we have to rely on that system for a lot of the infrastructure. So we'll have to give it access, not just to the internet, but to the task of managing power, government, economy, this kind of stuff.

3073.344 - 3077.405 Lex Fridman

And that just feels like a gradual process, given the bureaucracies of all those systems involved.

3077.645 - 3088.048 Roman Yampolsky

We've been doing it for years. Software controls all the systems, nuclear power plants, airline industry, it's all software-based. Every time there is electrical outage, I can't fly anywhere for days.

3089.127 - 3107.252 Lex Fridman

But there's a difference between software and AI. There's different kinds of software. So to give a single AI system access to the control of airlines and the control of the economy, that's not a trivial transition for humanity.

3107.532 - 3114.495 Roman Yampolsky

No, but if it shows it is safer, in fact, when it's in control, we get better results, people will demand that it was put in place.

3114.555 - 3114.995 Lex Fridman

Absolutely.

3115.055 - 3123.238 Roman Yampolsky

And if not, it can hack the system. It can use social engineering to get access to it. That's why I said it might take some time for it to accumulate those resources.

3123.338 - 3135.263 Lex Fridman

It just feels like that would take a long time for either humans to trust it or for the social engineering to come into play. It's not a thing that happens overnight. It feels like something that happens across one or two decades.

3136.157 - 3145.119 Roman Yampolsky

I really hope you're right, but it's not what I'm seeing. People are very quick to jump on a latest trend. Early adopters will be there before it's even deployed buying prototypes.

3146.161 - 3164.748 Lex Fridman

Maybe the social engineering. I could see, because, so for social engineering, AI systems don't need any hardware access. It's all software. So they can start manipulating you through social media and so on. Like you have AI assistants, they're gonna help you do a lot of, manage a lot of your day-to-day, and then they start doing social engineering.

3164.828 - 3185.483 Lex Fridman

But like, for a system that's so capable that it can escape the control of humans that created it, such a system being deployed at a mass scale and trusted by people to be deployed, it feels like that would take a lot of convincing.

3186.484 - 3190.267 Roman Yampolsky

So we've been deploying systems which had hidden capabilities.

3191.869 - 3192.65 Lex Fridman

Can you give an example?

3193.012 - 3215.597 Roman Yampolsky

GPT-4. I don't know what else it's capable of, but there are still things we haven't discovered it can do. They may be trivial in proportion to its capability, I don't know. It writes Chinese poetry, hypothetically; I know it does. But we haven't tested for all possible capabilities, and we are not explicitly designing them. We can only rule out bugs we find.

3215.817 - 3221.078 Roman Yampolsky

We cannot rule out bugs and capabilities because we haven't found them.
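
The "we can only rule out bugs we find" point is, in effect, a statement about test coverage: a behavioral audit can only probe capabilities someone thought to list. A minimal sketch, assuming a hypothetical `model.respond` API and made-up prompts:

```python
# Minimal sketch: a behavioral audit can only check capabilities someone listed.
# `model` and its `respond` method are hypothetical placeholders, not a real API.

KNOWN_CAPABILITY_PROBES = {
    "arithmetic": "What is 17 * 23?",
    "translation": "Translate 'good morning' into French.",
    "code": "Write a Python function that reverses a string.",
}

def audit(model) -> dict:
    """Run only the probes we knew to write; return one result per known probe."""
    results = {}
    for name, prompt in KNOWN_CAPABILITY_PROBES.items():
        results[name] = model.respond(prompt)  # hypothetical call
    return results

# The audit says nothing about capabilities absent from the dictionary
# (persuasion, deception, novel exploits, ...): unknown unknowns stay untested.
```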

3223.931 - 3253.574 Lex Fridman

Is it possible for a system to have hidden capabilities that are orders of magnitude greater than its non-hidden capabilities? This is the thing I'm really struggling with, where on the surface, the thing we understand it can do doesn't seem that harmful. So even if it has bugs, even if it has hidden capabilities like Chinese poetry or generating effective viruses, software viruses,

3255.1 - 3273.953 Lex Fridman

the damage it can do seems to be on the same order of magnitude as the capabilities that we know about. So this idea that the hidden capabilities will include being uncontrollable is something I'm struggling with, because GPT-4 on the surface seems to be very controllable.

3274.574 - 3291.988 Roman Yampolsky

Again, we can only ask and test for things we know about. If there are unknown unknowns, we cannot do it. I'm thinking of human savants, right? If you talk to a person like that, you may not even realize they can multiply 20-digit numbers in their head. You have to know to ask.

3293.449 - 3316.076 Lex Fridman

So, as I mentioned, just to linger on the fear of the unknown: the Pessimists Archive has documented this. Let's look at data from the past, at history. There's been a lot of fearmongering about technology, and the Pessimists Archive does a really good job of documenting how crazily afraid we are of every piece of technology.

3316.436 - 3335.334 Lex Fridman

We've been afraid for a long time; there's a blog post where Louis Anslow, who created the Pessimists Archive, writes about the fact that we've been fearmongering about robots and automation for over 100 years. So why is AGI different from the kinds of technologies we've been afraid of in the past?

3336.075 - 3363.291 Roman Yampolsky

So two things. One, we're switching from tools to agents. Tools don't have negative or positive impact. People using tools do. So guns don't kill. People with guns do. Agents can make their own decisions. They can be positive or negative. A pit bull can decide to harm you as an agent. The fears are the same. The only difference is now we have this technology.

3363.511 - 3375.856 Roman Yampolsky

Back then, 100 years ago, they were afraid of humanoid robots, and they had none. Today, every major company in the world is investing billions to create them. Not every, but you understand what I'm saying. It's very different.

3376.256 - 3393.681 Lex Fridman

Well, agents, it depends on what you mean by the word agents. All those companies are not investing in a system that has the kind of agency that's implied in the fears, where it can really make decisions on its own with no human in the loop.

3395.001 - 3404.244 Roman Yampolsky

They are saying they are building superintelligence and have a superalignment team. You don't think they are trying to create a system smart enough to be an independent agent under that definition?

3404.932 - 3428.39 Lex Fridman

I have not seen evidence of it. I think a lot of it is a marketing kind of discussion about the future, a mission statement about the kind of systems we could create in the long-term future. But in the short term, the kind of systems they're creating falls fully within the definition of narrow AI.

3428.831 - 3444.802 Lex Fridman

These are tools that have increasing capabilities, but they just don't have a sense of agency or consciousness or self-awareness, or the ability to deceive at the scale that would be required to cause mass-scale suffering and murder of humans.

3444.842 - 3452.348 Roman Yampolsky

Those systems are well beyond narrow AI. If you had to list all the capabilities of GPT-4, you would spend a lot of time writing that list.

3452.828 - 3454.069 Lex Fridman

But agency is not one of them.

3454.509 - 3467.854 Roman Yampolsky

Not yet. But do you think any of those companies are holding back because they think it may not be safe, or are they developing the most capable system they can, given the resources, and hoping they can control and monetize it?

3469.014 - 3493.894 Lex Fridman

Control and monetize. Hoping they can control and monetize. So you're saying that if they could press a button and create an agent that they no longer control, that they would have to ask nicely, a thing that lives on a server across a huge number of computers, you're saying that they would push for the creation of that kind of system?

3494.494 - 3506.321 Roman Yampolsky

I mean, I can't speak for other people, for all of them. I think some of them are very ambitious. They fundraise in the trillions. They talk about controlling the light cone of the universe. I would guess that they might.

3508.643 - 3530.087 Lex Fridman

Well, that's a human question, whether humans are capable of that. Probably some humans are capable of that. My more direct question is whether it's possible to create such a system, to have a system that has that level of agency. I don't think that's an easy technical challenge. It doesn't feel like we're close to that.

3530.747 - 3546.578 Lex Fridman

A system that has the kind of agency where it can make its own decisions and deceive everybody about them. The current architecture we have in machine learning and how we train the systems, how we deploy the systems and all that, it just doesn't seem to support that kind of agency.

3547.364 - 3565.234 Roman Yampolsky

I really hope you're right. I think the scaling hypothesis is correct. We haven't seen diminishing returns. It used to be we asked how long before AGI. Now we should ask how much until AGI. It's a trillion dollars today. It's a billion dollars next year. It's a million dollars in a few years.

3566.435 - 3572.798 Lex Fridman

Don't you think it's possible to basically run out of trillions? So is this constrained by compute?

3573.579 - 3576.18 Roman Yampolsky

Compute gets cheaper every day, exponentially.
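
As a rough illustration of the "how much until AGI" framing above, here is a toy cost curve under an assumed exponential price decline. The halving rate and starting figure are assumptions for illustration, not measured trends from the conversation.

```python
# Toy model of an exponentially falling cost to reach a fixed capability level.
# cost(t) = cost_0 * decay ** t ; the 2x-per-year decline is an assumption.

initial_cost = 1e12   # hypothetical: $1 trillion today
annual_decay = 0.5    # hypothetical: cost halves each year

for year in range(0, 21, 5):
    cost = initial_cost * (annual_decay ** year)
    print(f"year {year:>2}: ~${cost:,.0f}")

# Halving yearly, $1T falls to roughly $1B after about 10 years and roughly
# $1M after about 20, which is the shape (not the literal schedule) of the claim.
```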

3576.528 - 3579.23 Lex Fridman

But then that becomes a question of decades versus years.

3579.97 - 3588.515 Roman Yampolsky

If the only disagreement is that it will take decades, not years, for everything I'm saying to materialize, then I can go with that.

3590.016 - 3617.188 Lex Fridman

But if it takes decades, then the development of tools for AI safety becomes more and more realistic. So I guess the question is, I have a fundamental belief that humans, when faced with danger, can come up with ways to defend against that danger. And one of the big problems facing AI safety currently, for me, is that there are no clear illustrations of what that danger looks like.

3618.869 - 3643.644 Lex Fridman

There are no illustrations of AI systems doing a lot of damage. And so it's unclear what you're defending against, because currently it's a philosophical notion that, yes, it's possible to imagine AI systems that take control of everything and then destroy all humans. It's also a more formal, mathematical notion that you talk about: that it's impossible to have a perfectly secure system.

3644.184 - 3670.914 Lex Fridman

You can't prove that a program of sufficient complexity is completely safe and perfect and know everything about it. Yes, but when you actually, pragmatically look at how much damage AI systems have done, and what kind of damage, there have not been illustrations of that. Even with autonomous weapon systems, there have not been mass deployments of autonomous weapon systems, luckily.
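
The formal impossibility claim here is usually grounded in classic computability results. As a standard, informal statement (an editorial gloss, not Yampolskiy's exact formulation):

```latex
% Rice's theorem, informally: every non-trivial behavioral property of programs
% is undecidable, and "this program is completely safe" (defined by behavior)
% is such a property, so no perfect, fully general safety verifier can exist.
\nexists\, V \ \text{(total and always correct)} \ \text{such that} \quad
\forall p:\; V(p) = 1 \iff p \ \text{satisfies } P,
\qquad \text{for any non-trivial behavioral property } P.
```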

3672.234 - 3700.706 Lex Fridman

The automation in war currently is very limited. The automation is at the scale of individuals rather than at the scale of strategy and planning. I think one of the challenges here is: where are the dangers? And the intuition that Yann LeCun and others have is, let's keep building AI systems in the open until the dangers start rearing their heads,

3702.066 - 3723.578 Lex Fridman

and they become more explicit, they start being case studies, illustrative case studies that show exactly how the damage by AI systems is done. Then regulation can step in, then brilliant engineers can step up, and we could have Manhattan-style projects that defend against such systems. That's kind of the notion.

3725.339 - 3737.608 Lex Fridman

And I guess in tension with that is the idea that, for you, we need to be thinking about this now so that we're ready, because we won't have much time once the systems are deployed. Is that true?

3738.294 - 3749.581 Roman Yampolsky

There is a lot to unpack here. There is the Partnership on AI, a consortium of many large corporations. They have a database of AI accidents that they collect. I contributed a lot to that database.

3750.261 - 3764.169 Roman Yampolsky

If we have so far made almost no progress in actually solving this problem, not patching it, not, again, lipstick-on-a-pig kinds of solutions, why would we think we'll do better when we're closer to the problem?

3765.773 - 3775.059 Lex Fridman

All the things you mentioned are serious concerns. Measuring the amount of harm, so benefit versus risk, is difficult. But to you, the sense is that the risk has already superseded the benefit?

3775.32 - 3797.595 Roman Yampolsky

Again, I want to be perfectly clear: I love AI, I love technology. I'm a computer scientist. I have a PhD in engineering. I work at an engineering school. There is a huge difference between saying we need to develop narrow AI systems, superintelligent at solving specific human problems like protein folding, and saying let's create a superintelligent machine, a kind of god, and it will decide what to do with us.

3798.576 - 3806.582 Roman Yampolsky

Those are not the same. I am against superintelligence in the general sense, with no undo button.

3807.603 - 3827.028 Lex Fridman

Do you think the teams that are able to do AI safety for the kinds of narrow AI risks you've mentioned, are those approaches going to be at all productive in leading to approaches for AI safety with AGI? Or is it just a fundamentally different problem?

3827.048 - 3851.323 Roman Yampolsky

Partially, but they don't scale. For narrow AI, for deterministic systems, you can test them. You have edge cases. You know what the answer should look like; you know the right answers. For general systems, you have an infinite test surface. You have no edge cases. You cannot even know what to test for. Again, the unknown unknowns are underappreciated by people looking at this problem.

3851.783 - 3861.528 Roman Yampolsky

You are always asking me, how will it kill everyone? How will it fail? The whole point is, if I knew it, I would be superintelligent, and despite what you might think, I'm not.

3863.028 - 3870.992 Lex Fridman

So to you, the concern is that we would not be able to see early signs of an uncontrollable system.

3871.716 - 3888.366 Roman Yampolsky

It is a master at deception. Sam tweeted about how great it is at persuasion. And we see it ourselves, especially now with voices, with maybe kind of flirty, sarcastic female voices. It's gonna be very good at getting people to do things.

3888.466 - 3910.002 Lex Fridman

But see, I'm very concerned about these systems being used to control the masses. But in that case, the developers know about the kind of control that's happening. You're more concerned about the next stage, where even the developers don't know about the deception.

3910.791 - 3933.647 Roman Yampolsky

Right. I don't think developers know everything about what they are creating. They have lots of great knowledge. We're making progress on explaining parts of a network. We can understand, okay, this node, this cluster of nodes, gets excited when this input is presented. But we're nowhere near close to understanding the full picture, and I think it's impossible.
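
The "this node gets excited when this input is presented" style of analysis is roughly activation probing: record unit activations over inputs and see which unit responds most strongly. A tiny sketch on a made-up toy "network," not a real model:

```python
# Tiny sketch of per-unit activation probing on a toy stand-in for a hidden layer.
# The "network" is random and hypothetical; real work does this over huge corpora.

import random
random.seed(0)

N_UNITS = 8
TOKENS = ["cat", "dog", "car"]
# Each toy unit has a fixed response strength per token (made-up weights).
unit_weights = [{tok: random.random() for tok in TOKENS} for _ in range(N_UNITS)]

def activations(token: str) -> list[float]:
    """Activation of each toy unit for a single input token."""
    return [w[token] for w in unit_weights]

for token in TOKENS:
    acts = activations(token)
    best = max(range(N_UNITS), key=lambda i: acts[i])
    print(f"input {token!r}: unit {best} fires hardest ({acts[best]:.2f})")

# Even done at scale, this explains fragments of a network, not the whole model,
# which is the limit being described in the conversation.
```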

3934.548 - 3956.823 Roman Yampolsky

You need to be able to survey an explanation. The size of those models prevents a single human from observing all this information, even if provided by the system. So either we get the model itself as the explanation for what's happening, and that's not comprehensible to us, or we get a compressed explanation, a lossy compression, along the lines of: here are the top 10 reasons you got fired.

3957.844 - 3959.425 Roman Yampolsky

It's something, but it's not a full picture.
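
The "top 10 reasons" image maps onto how attribution-style explanations are usually summarized: a full attribution vector gets truncated to its largest entries, which is lossy by construction. A minimal sketch with invented feature names and numbers:

```python
# Minimal sketch: truncating a full attribution vector to its top-k entries,
# the "here are the top reasons" style of explanation. All values are made up.

full_attribution = {              # hypothetical per-feature contributions
    "late_reports": 0.31, "missed_meetings": 0.22, "budget_overrun": 0.18,
    "peer_feedback": 0.11, "tenure": 0.07, "training_hours": 0.06,
    "commute_time": 0.03, "team_size": 0.02,
}

def top_k_explanation(attribution: dict, k: int) -> dict:
    """Keep only the k largest contributions: a lossy summary of the model."""
    ranked = sorted(attribution.items(), key=lambda kv: abs(kv[1]), reverse=True)
    return dict(ranked[:k])

summary = top_k_explanation(full_attribution, k=3)
coverage = sum(summary.values()) / sum(full_attribution.values())
print(summary)
print(f"fraction of total attribution retained: {coverage:.0%}")  # ~71% here
```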

3959.906 - 3978.099 Lex Fridman

You've given elsewhere the example of a child: everybody, all humans, tries to deceive. They try to lie early on in their life. I think we'll just get a lot of examples of deception from large language models or AI systems. They're going to be kind of shitty, or they'll be pretty good but we'll catch them off guard.

3978.539 - 4003.395 Lex Fridman

We'll start to see the kind of momentum towards developing increasing deception capabilities. And that's when you're like, okay, we need to do some kind of alignment that prevents deception. But then, if you support open source, you can have open source models that have some level of deception, and you can start to explore on a large scale: how do we stop it from being deceptive?

4003.775 - 4017.223 Lex Fridman

Then there's a more explicit, pragmatic kind of problem to solve: how do we stop AI systems from trying to optimize for deception? That's just an example, right?

4017.796 - 4044.384 Roman Yampolsky

So there is a paper, I think it came out last week, by Dr. Park et al., from MIT I think, and they showed that existing models already show successful deception in what they do. My concern is not that they lie now and we need to catch them and tell them not to lie. My concern is that once they are capable and deployed, they will later change their mind, because that's what

4045.814 - 4065.4 Roman Yampolsky

unrestricted learning allows you to do. Lots of people grow up, maybe in a religious family; they read some new books and they turn away from their religion. That's a treacherous turn in humans. If you learn something new about your colleagues, maybe you'll change how you react to them.

4066.121 - 4089.723 Lex Fridman

Yeah, the treacherous turn. If we just mention humans, Stalin and Hitler, there's a turn. Stalin is a good example. He just seems like a normal communist, a follower of Lenin, until there's a turn: a turn in what that means once he has complete control, what the execution of that policy means, and how many people get to suffer.

4090.163 - 4105.498 Roman Yampolsky

And you can't say they are not rational. The rational decision changes based on your position. When you are under the boss, the rational policy may be to follow orders and be honest. When you become the boss, the rational policy may shift.

4106.67 - 4126.617 Lex Fridman

Yeah. And by the way, a lot of my disagreement here is just playing devil's advocate, to challenge your ideas and to explore them together. So one of the big problems here, in this whole conversation, is that human civilization hangs in the balance, and yet everything is unpredictable. We don't know what these systems will look like.

4126.637 - 4160.781 Roman Yampolsky

The robots are coming. There's a refrigerator making a buzzing noise. Very menacing, very menacing. So every time I'm about to talk about this topic, things start to happen. My flight yesterday was canceled without the possibility to rebook. I was giving a talk at Google in Israel, and three cars, which were supposed to take me to the talk, could not. I'm just saying. I like AIs.

4161.362 - 4163.583 Roman Yampolsky

I for one welcome our overlords.

4164.244 - 4191.312 Lex Fridman

There's a degree to which, I mean, it is very obvious: we've already increasingly given our lives over to software systems. And then it seems obvious, given the capabilities of AI that are coming, that we'll give our lives over increasingly to AI systems. Cars will drive themselves. The refrigerator eventually will optimize what I get to eat. And...

4194.554 - 4217.528 Lex Fridman

As more and more of our lives are controlled or managed by AI assistants, it is very possible that there's a drift. I mean, I personally am concerned about non-existential stuff, the more near-term things, because before we even get to existential risk, I feel like there could be just so many brave-new-world types of situations. You mentioned the term behavioral drift.

4218.308 - 4242.641 Lex Fridman

It's the slow boiling that I'm really concerned about: as we give our lives over to automation, our minds can become controlled by governments, by companies, or just in a distributed way. There's a drift. Some aspect of our human nature gives itself over to the control of AI systems, and they, in an unintended way, just control how we think.

4243.081 - 4269.461 Lex Fridman

Maybe there'd be a herd-like mentality in how we think, which would kill all creativity and exploration of ideas, the diversity of ideas, or much worse. So it's true, it's true. But a lot of the conversation I'm having with you now is also kind of wondering, almost at a technical level, how can AI escape control? What would that system look like?

4270.702 - 4302.344 Lex Fridman

Because to me, it's terrifying and fascinating. And also fascinating to me is maybe the optimistic notion that it's possible to engineer systems that defend against that. One of the things you write a lot about in your book is verifiers. So not just humans (humans are also verifiers), but software systems that look at AI systems and help you understand: this thing is getting real weird.

4303.264 - 4313.068 Lex Fridman

Help you analyze those systems. Maybe this is a good time to talk about verification. What is this beautiful notion of verification?

4313.55 - 4334.086 Roman Yampolsky

My claim is, again, that there are very strong limits on what we can and cannot verify. A lot of times when you post something on social media, people go, oh, I need a citation to a peer-reviewed article. But what is a peer-reviewed article? You found two people in a world of hundreds of thousands of scientists who said, sure, publish it, I don't care. That's the verifier of that process.

4335.127 - 4361.808 Roman Yampolsky

When people say, oh, it's formally verified software, a mathematical proof, they expect something close to a 100% chance of it being free of all problems. But if you actually look at the research, software is full of bugs. Old mathematical theorems, which have been proven for hundreds of years, have been discovered to contain bugs, on top of which we generate new proofs, and now we have to redo all that.

4362.569 - 4385.877 Roman Yampolsky

So, verifiers are not perfect. Usually, they are