
Marcus Hutter

👤 Person
912 total appearances
Voice ID

Voice Profile Active

This person's voice can be automatically recognized across podcast episodes using AI voice matching.

Voice samples: 1
Confidence: Medium

Appearances Over Time

Podcast Appearances

Lex Fridman Podcast
#75 – Marcus Hutter: Universal Artificial Intelligence, AIXI, and AGI

And after a while it predicts: oh, the next coin flip will be heads with probability 60%.
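That kind of prediction can be sketched as a simple frequency estimate with a Laplace correction (a hypothetical illustration, not code from the episode):

```python
# Estimate the probability that the next flip is heads from observed flips,
# using Laplace's rule of succession: (heads + 1) / (flips + 2).
def predict_heads(flips):
    heads = sum(flips)  # flips is a list of 0/1 outcomes, 1 = heads
    return (heads + 1) / (len(flips) + 2)

# After observing flips from a 60%-biased coin, the estimate approaches 0.6.
biased_sample = [1] * 60 + [0] * 40
print(predict_heads(biased_sample))  # 61/102 ≈ 0.598
```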

So it's the stochastic version of that.

Yes, yeah.

Well, in Solomonoff induction, precisely what you do is combine the two: looking for the shortest program is like applying Occam's razor, like looking for the simplest theory.

There's also Epicurus' principle, which says, if you have multiple hypotheses, which equally well describe your data, don't discard any of them.

Keep all of them around, you never know.

And you can put that together and say, okay, I have a bias towards simplicity, but I don't rule out the larger models.

And technically what we do is we weigh the shorter models higher and the longer models lower.

And you use a Bayesian technique.

You have a prior, which is precisely two to the minus the complexity of the program.

And you weigh all these hypotheses and take this mixture, and that is also how the stochasticity comes in.
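A toy sketch of this Bayesian mixture, assuming a small hand-picked hypothesis class where each hypothesis has a known description length as a stand-in for program length (Solomonoff's actual mixture runs over all programs and is incomputable):

```python
# Toy Bayesian mixture with a Solomonoff-style prior 2^(-description_length).
# Each hypothesis is (description_length_in_bits, probability_of_heads).
hypotheses = [(2, 0.5), (5, 0.6), (8, 0.9)]

def posterior(data, hypotheses):
    # Weight each hypothesis by prior 2^-K times its likelihood of the data.
    weights = []
    for k, p in hypotheses:
        likelihood = 1.0
        for x in data:
            likelihood *= p if x == 1 else (1 - p)
        weights.append(2.0 ** -k * likelihood)
    total = sum(weights)
    return [w / total for w in weights]

def predict(data, hypotheses):
    # Mixture prediction: posterior-weighted average over all hypotheses,
    # so simple models dominate until the data favors a more complex one.
    post = posterior(data, hypotheses)
    return sum(w * p for w, (_, p) in zip(post, hypotheses))

data = [1, 1, 0, 1, 1, 0, 1, 1]  # mostly heads
print(predict(data, hypotheses))
```

The shorter hypothesis starts with the highest weight, but as the heads-heavy data accumulates, the mixture's prediction drifts toward the biased hypotheses.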

I essentially have already explained it.

So compression means, for me, finding short programs for the data or the phenomenon at hand.

You could interpret it more widely as finding simple theories, which can be mathematical theories, or maybe even informal, like just in words.

Compression means finding short descriptions, explanations, programs for the data.
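The idea that regular data admits a short description while patternless data does not can be illustrated with an off-the-shelf compressor as a rough proxy for description length (not Kolmogorov complexity itself, which is incomputable):

```python
import os
import zlib

# A highly regular string versus an (almost surely) patternless random one.
regular = b"0123456789" * 100  # 1000 bytes with an obvious short description
random_ = os.urandom(1000)     # 1000 bytes of raw randomness

# zlib acts as a crude stand-in for "shortest program": the regular data
# compresses to a few dozen bytes, while random data does not shrink at all.
print(len(zlib.compress(regular, 9)))  # small
print(len(zlib.compress(random_, 9)))  # roughly 1000 bytes or slightly more
```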

Well, at least all of science I see as an endeavor of compression; not all of humanity, maybe.

And well, there are also some other aspects of science like experimental design, right?

I mean, we create experiments specifically to get extra knowledge, and that isn't part of the decision-making process.

But once we have the data, to understand the data is essentially compression.

So I don't see any difference between compression