
Marcus Hutter


Podcast Appearances

Lex Fridman Podcast
#75 – Marcus Hutter: Universal Artificial Intelligence, AIXI, and AGI

I didn't explicitly talk about it, but this is used for universal prediction, and you plug it into the sequential decision-making mechanism.

And then you get the best of both worlds: you have a long-term planning agent, but it doesn't need to know anything about the world, because the Solomonoff induction part learns.

So what it does is, in the simplest case, as I said: take the shortest program describing your data, run it, and you have a prediction, which would be deterministic.

Yes.

Okay.

But you should not just take the shortest program; you should also consider the longer ones, but give them lower a priori probability.

So in the Bayesian framework, you say that a priori any distribution, which is a model or a stochastic program, has a certain a priori probability, which is 2 to the minus the length of this program (and why 2 to the minus the length, I could explain). So longer programs are punished a priori.
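The "why 2 to the minus length, I could explain" aside has a standard answer: if programs are written in a prefix-free binary code, Kraft's inequality guarantees that these weights sum to at most one, so they form a valid (semi)prior over programs:

```latex
w(p) = 2^{-\ell(p)}, \qquad \sum_{p\ \text{prefix-free}} 2^{-\ell(p)} \le 1
```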

And then you multiply it by the so-called likelihood function, which, as the name suggests, is how likely this model is given the data at hand. So if you have a very wrong model, it's very unlikely that this model is true, and so it gets a very small number. So even if the model is simple, it gets penalized by that.

And what you do then is just take the sum, or rather the weighted average, over it. And this gives you a probability distribution: the universal distribution, or Solomonoff distribution.
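The construction described here (prior 2^(-length) times likelihood, averaged over models) can be sketched as a toy Python example. The hypothesis class, the bias parameters, and the description lengths below are all made up for illustration; actual Solomonoff induction mixes over all programs and is incomputable:

```python
# Toy Solomonoff-style Bayesian mixture over a tiny hypothesis class.
# Each hypothesis is a Bernoulli "program" predicting the next bit to be 1
# with probability theta. The description lengths (in bits) are invented
# for illustration; the prior weight of a hypothesis is 2^(-length).

# hypotheses: (name, theta, description_length_in_bits)
hypotheses = [
    ("always-0",   0.0,  2),  # very short program: print 0 forever
    ("always-1",   1.0,  2),  # very short program: print 1 forever
    ("fair-coin",  0.5,  5),  # slightly longer program
    ("biased-3/4", 0.75, 9),  # longer still
]

def likelihood(theta, data):
    """P(data | theta) for an i.i.d. Bernoulli model."""
    p = 1.0
    for bit in data:
        p *= theta if bit == 1 else (1.0 - theta)
    return p

def predict_next(data):
    """Posterior-weighted probability that the next bit is 1:
    prior 2^(-length) times likelihood, summed and normalized."""
    weights = [2.0 ** -length * likelihood(theta, data)
               for _, theta, length in hypotheses]
    total = sum(weights)
    return sum(w * theta
               for w, (_, theta, _) in zip(weights, hypotheses)) / total

print(predict_next([1, 1, 1, 1]))
```

With the data 1, 1, 1, 1, the short "always-1" program dominates the posterior, so the predicted probability of another 1 is close to one; with no data at all, the symmetric prior leaves the prediction near one half.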

It's a basic optimization problem.

What are we talking about?