
Lex Fridman Podcast

#75 – Marcus Hutter: Universal Artificial Intelligence, AIXI, and AGI

26 Feb 2020

1h 40m duration
16186 words
3 speakers
Description

Marcus Hutter is a senior research scientist at DeepMind and professor at Australian National University. Throughout his career of research, including with Jürgen Schmidhuber and Shane Legg, he has proposed a lot of interesting ideas in and around the field of artificial general intelligence, including the development of the AIXI model, which is a mathematical approach to AGI that incorporates ideas of Kolmogorov complexity, Solomonoff induction, and reinforcement learning.

EPISODE LINKS:
Hutter Prize: http://prize.hutter1.net
Marcus web: http://www.hutter1.net
Books mentioned:
- Universal AI: https://amzn.to/2waIAuw
- AI: A Modern Approach: https://amzn.to/3camxnY
- Reinforcement Learning: https://amzn.to/2PoANj9
- Theory of Knowledge: https://amzn.to/3a6Vp7x

This conversation is part of the Artificial Intelligence podcast. If you would like to get more information about this podcast, go to https://lexfridman.com/ai or connect with @lexfridman on Twitter, LinkedIn, Facebook, Medium, or YouTube, where you can watch the video versions of these conversations. If you enjoy the podcast, please rate it 5 stars on Apple Podcasts, follow on Spotify, or support it on Patreon.

This episode is presented by Cash App. Download it (App Store, Google Play), use code "LexPodcast".

Here's the outline of the episode. On some podcast players you should be able to click the timestamp to jump to that time.

OUTLINE:
00:00 - Introduction
03:32 - Universe as a computer
05:48 - Occam's razor
09:26 - Solomonoff induction
15:05 - Kolmogorov complexity
20:06 - Cellular automata
26:03 - What is intelligence?
35:26 - AIXI - Universal Artificial Intelligence
1:05:24 - Where do rewards come from?
1:12:14 - Reward function for human existence
1:13:32 - Bounded rationality
1:16:07 - Approximation in AIXI
1:18:01 - Godel machines
1:21:51 - Consciousness
1:27:15 - AGI community
1:32:36 - Book recommendations
1:36:07 - Two moments to relive (past and future)

Transcription

Chapter 1: What is the main topic discussed in this episode?

0.031 - 17.263 Lex Fridman

The following is a conversation with Marcus Hutter, senior research scientist at Google DeepMind. Throughout his career of research, including with Jürgen Schmidhuber and Shane Legg, he has proposed a lot of interesting ideas in and around the field of artificial general intelligence.


17.243 - 31.839 Lex Fridman

including the development of AIXI, spelled A-I-X-I, a model which is a mathematical approach to AGI that incorporates ideas of Kolmogorov complexity, Solomonoff induction, and reinforcement learning.


Chapter 2: What is the concept of the universe as a computer?

33.12 - 61.15 Lex Fridman

In 2006, Marcus launched the 50,000 Euro Hutter Prize for lossless compression of human knowledge. The idea behind this prize is that the ability to compress well is closely related to intelligence. This, to me, is a profound idea. Specifically, if you can compress the first 100 megabytes or one gigabyte of Wikipedia better than your predecessors, your compressor likely has to also be smarter.
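The compression-intelligence link Lex describes can be illustrated with a minimal sketch (this uses an off-the-shelf compressor on synthetic data, not the Hutter Prize setup, which scores lossless compression of a fixed Wikipedia snapshot): a compressor shrinks its input exactly to the extent that it finds structure in it, so patterned data compresses to a small fraction of its size while patternless data barely compresses at all.

```python
import os
import zlib

# Regular, patterned text compresses far better than random bytes:
# a compressor succeeds exactly where it finds structure in the data.
regular = b"the quick brown fox jumps over the lazy dog " * 200
random_bytes = os.urandom(len(regular))

def ratio(data: bytes) -> float:
    """Compressed size as a fraction of the original size."""
    return len(zlib.compress(data, 9)) / len(data)

print(f"patterned text: {ratio(regular):.3f}")     # tiny fraction of the original
print(f"random bytes:   {ratio(random_bytes):.3f}")  # near 1.0, no structure to exploit
```

In the prize's framing, a better compressor for Wikipedia is one that has captured more of the regularities, and hence more of the knowledge, in the text.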


62.272 - 81.089 Lex Fridman

The intention of this prize is to encourage the development of intelligent compressors as a path to AGI. In conjunction with this podcast release just a few days ago, Marcus announced a 10X increase in several aspects of this prize, including the money, to 500,000 euros.


Chapter 3: How does Occam's Razor apply to scientific theories?

82.731 - 107.292 Lex Fridman

The better your compressor works relative to the previous winners, the higher fraction of that prize money is awarded to you. You can learn more about it if you Google simply, Hutter Prize. I'm a big fan of benchmarks for developing AI systems, and the Hutter Prize may indeed be one that will spark some good ideas for approaches that will make progress on the path of developing AGI systems.


107.845 - 128.719 Lex Fridman

This is the Artificial Intelligence Podcast. If you enjoy it, subscribe on YouTube, give it five stars on Apple Podcasts, support it on Patreon, or simply connect with me on Twitter at Lex Fridman, spelled F-R-I-D-M-A-N. As usual, I'll do one or two minutes of ads now and never any ads in the middle that can break the flow of the conversation.


Chapter 4: What is Solomonoff induction and its significance?

129.3 - 154.339 Lex Fridman

I hope that works for you and doesn't hurt the listening experience. This show is presented by Cash App, the number one finance app in the App Store. When you get it, use code LEXPODCAST. Cash App lets you send money to friends, buy Bitcoin, and invest in the stock market with as little as $1. Broker services are provided by Cash App Investing, a subsidiary of Square, and member SIPC.


Chapter 5: How is Kolmogorov complexity defined and utilized?

154.319 - 171.558 Lex Fridman

Since Cash App allows you to send and receive money digitally, peer-to-peer, and security in all digital transactions is very important, let me mention the PCI data security standard that Cash App is compliant with. I'm a big fan of standards for safety and security.


172.139 - 195.661 Lex Fridman

PCI DSS is a good example of that, where a bunch of competitors got together and agreed that there needs to be a global standard around the security of transactions. Now, we just need to do the same for autonomous vehicles and AI systems in general. So again, if you get Cash App from the App Store or Google Play and use the code LEXPODCAST, you'll get $10.


196.102 - 206.491 Lex Fridman

And Cash App will also donate $10 to FIRST, one of my favorite organizations that is helping to advance robotics and STEM education for young people around the world.


Chapter 6: What insights can we gain from cellular automata?

207.712 - 236.288 Lex Fridman

And now, here's my conversation with Marcus Hutter. Do you think of the universe as a computer or maybe an information processing system? Let's go with a big question first.


236.689 - 259.251 Marcus Hutter

Okay, with a big question first. I think it's a very interesting hypothesis or idea. And I have a background in physics, so I know a little bit about physical theories, the standard model of particle physics and general relativity theory. And they are amazing and describe virtually everything in the universe. And they're all, in a sense, computable theories. I mean, they're very hard to compute.


Chapter 7: What defines intelligence in the context of AGI?

259.231 - 274.776 Marcus Hutter

And, you know, they're very elegant, simple theories which describe virtually everything in the universe. So there's a strong indication that somehow the universe is computable. But it's a plausible hypothesis.


275.228 - 300.263 Lex Fridman

Just like you said, general relativity, quantum field theory: why do you think the laws of physics are so nice and beautiful and simple and compressible? Do you think our universe was designed, or is naturally this way? Are we just focusing on the parts that are especially compressible? Do human minds just enjoy something about that simplicity?


300.403 - 303.868 Lex Fridman

And in fact, there's other things that are not so compressible.


304.203 - 322.087 Marcus Hutter

No, I strongly believe, and I'm pretty convinced, that the universe is inherently beautiful, elegant and simple and described by these equations. And we're not just picking that. I mean, if there were some phenomena which cannot be neatly described, scientists would try that, right?


322.187 - 337.737 Marcus Hutter

And, you know, there's biology, which is more messy, but we understand that it's an emergent phenomenon and, you know, these are complex systems, but they still follow the same rules, right, of quantum electrodynamics. All of chemistry follows that, and we know that. I mean, we cannot compute everything because we have limited computational resources.

337.717 - 358.12 Marcus Hutter

No, I think it's not a bias of the humans, but it's objectively simple. I mean, of course, you never know, you know, maybe there's some corners very far out in the universe, or super, super tiny below the nucleus of atoms, or, well, parallel universes, which are not nice and simple, but there's no evidence for that.

358.14 - 368.031 Marcus Hutter

And we should apply Occam's razor and, you know, choose the simplest theory consistent with it. But although it's a little bit self-referential. So maybe a quick pause. What is Occam's Razor?

368.572 - 390.977 Marcus Hutter

So Occam's Razor says that you should not multiply entities beyond necessity, which, if you translate that into proper English, in the scientific context, means that if you have two theories or hypotheses or models which equally well describe the phenomenon you're studying, or the data, you should choose the simpler one.
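That statement of the razor can be sketched in a few lines (the encoding of hypotheses as Python expressions, and measuring simplicity by string length, are illustrative choices, not anything from the conversation): given two descriptions that reproduce the same observations, prefer the shorter one.

```python
# Two hypotheses, written as Python expressions, that generate exactly
# the same data; Occam's razor says prefer the shorter description.
data = [2, 4, 6, 8, 10, 12, 14, 16, 18, 20]

hypothesis_a = "[2 * k for k in range(1, 11)]"  # a simple rule
hypothesis_b = repr(data)                       # a literal listing

# Both describe the observations equally well...
assert eval(hypothesis_a) == data and eval(hypothesis_b) == data

# ...so the razor picks the shorter (simpler) description.
chosen = min([hypothesis_a, hypothesis_b], key=len)
print(chosen)  # → the rule, not the listing
```

Measuring simplicity by the length of a description like this is the same intuition that Kolmogorov complexity, discussed later in the episode, makes precise.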

390.957 - 412.041 Lex Fridman

So that's just the principle? Yes. That's not like a provable law, perhaps? Perhaps we'll kind of discuss it and think about it. But what's the intuition of why the simpler answer is the one that is likelier to be the more correct descriptor of whatever we're talking about?

Chapter 8: How does the AIXI model represent universal artificial intelligence?

443.52 - 471.741 Marcus Hutter

You can just accept it, that is the principle of science, and we use this principle and it seems to be successful. We don't know why, but it just happens to be. Or you can try, you know, to find another principle which explains Occam's razor. And if we start with the assumption that the world is governed by simple rules, then there's a bias towards simplicity, and applying Occam's Razor


472.379 - 486.051 Marcus Hutter

is the mechanism to finding these rules. And actually in a more quantitative sense, and we come back to that later in the case of Solomonoff induction, you can rigorously prove that. If you assume that the world is simple, then Occam's razor is the best you can do in a certain sense.


486.592 - 504.709 Lex Fridman

So I apologize for the romanticized question, but why do you think, outside of its effectiveness, why do you think we find simplicity so appealing as human beings? Why does E equals MC squared seem so... beautiful to us humans?


505.971 - 536.389 Marcus Hutter

I guess mostly. In general, many things can be explained by an evolutionary argument. And, you know, there are some artifacts in humans which are just artifacts and not necessarily necessary. But with this beauty and simplicity, I believe at least the core is about, like science, finding regularities in the world, understanding the world, which is necessary for survival, right?


537.263 - 562.714 Marcus Hutter

If I'm in a bush, right, and I just see noise, and there is a tiger, right, and it eats me, then I'm dead. But if I try to find a pattern... And we know that humans are prone to find more patterns in data than there are, you know, like the Mars face and all these things. But this bias towards finding patterns, even if there are none, but I mean, it's best, of course, if they are real, helps us for survival.

564.078 - 587.27 Lex Fridman

Yeah, that's fascinating. I haven't thought really about the... I thought I just loved science, but indeed, in terms of just survival purposes, there is an evolutionary argument for why we find the work of Einstein so beautiful. Maybe a quick small tangent. Could you describe what Solomonoff induction is?

588.212 - 616.63 Marcus Hutter

Yeah, so that's a theory which... I claim, and Ray Solomonoff sort of claimed a long time ago, that this solves the big philosophical problem of induction. And I believe the claim is essentially true. And what it does is the following. So, okay, for the picky listener, induction can be interpreted narrowly and widely. Narrow means inferring models from data.

618.095 - 641.691 Marcus Hutter

And widely means also then using these models for doing predictions. So prediction is also part of the induction. So I'm a little sloppy sort of with the terminology. And maybe that comes from Ray Solomonoff, you know, being sloppy. Maybe I shouldn't say that. He can't complain anymore. So let me explain a little bit this theory in simple terms. So assume you have a data sequence.

641.851 - 662.682 Marcus Hutter

Make it very simple. The simplest one, say, 1, 1, 1, 1, 1, and you see 100 ones. What do you think comes next? The natural answer, I'm going to speed up a little bit. The natural answer is, of course, one. And the question is, why? Well, we see a pattern there. There's a one and we repeat it. And why should it suddenly after 100 ones be different?
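Marcus's hundred-ones example can be sketched as a toy Solomonoff-style predictor (the hypothesis class and the bit-lengths below are invented for illustration; real Solomonoff induction sums over all programs and is incomputable): every hypothesis consistent with the data so far votes on the next symbol, weighted by a prior that decays exponentially in description length, so the shortest consistent "program" dominates.

```python
# Toy Solomonoff-style predictor. Each hypothesis is a generator for the
# first n symbols plus a made-up description length in bits; the prior
# weight 2**(-bits) stands in for the incomputable 2**(-K(h)).
hypotheses = [
    (lambda n: [1] * n, 5),                                     # "all ones"
    (lambda n: [0] * n, 5),                                     # "all zeros"
    (lambda n: [i % 2 for i in range(n)], 8),                   # "alternate 0,1,0,1"
    (lambda n: [1] * min(n, 100) + [0] * max(0, n - 100), 25),  # "100 ones, then zeros"
]

def predict_next(observed):
    """Prior-weighted vote among hypotheses consistent with the data."""
    votes = {0: 0.0, 1: 0.0}
    n = len(observed)
    for gen, bits in hypotheses:
        if gen(n) == observed:              # keep only consistent hypotheses
            votes[gen(n + 1)[-1]] += 2.0 ** (-bits)
    return max(votes, key=votes.get)

print(predict_next([1] * 100))  # → 1: the shortest consistent program dominates
```

After 100 ones, "100 ones, then zeros" is still consistent and predicts 0, but its longer description gives it exponentially less weight than "all ones", which is exactly why the natural answer is 1.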
