
Ihor Kendiukhov

Speaker

Podcast Appearances

LessWrong (Curated & Popular)
"On Independence Axiom" by Ihor Kendiukhov

Now, F2 must agree with F1 on one thing: the ranking of certain (degenerate) outcomes.

If you prefer bundle A to bundle B with certainty, then F2(A) > F2(B), just as F1(A) > F1(B). But F2 contains strictly more information than F1.

It tells you not just that you prefer A to B, but how much you prefer A to B relative to other pairs, in the precise sense that these ratios of differences determine what gambles you would accept.

F1 says nothing about gambles at all.
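The gap between the two kinds of information can be sketched numerically. Below is a minimal Python illustration (the particular utility functions u1 and u2 are my assumption, not from the episode): u2 is a monotone transform of u1, so it carries exactly the same ordinal information about sure outcomes, yet it prescribes a different choice over a gamble.

```python
# Sketch (assumed example): two utility functions that agree on every ranking
# of sure outcomes but disagree about gambles. u2 is a monotone (but not
# affine) transform of u1, so it encodes the same F1-style ordinal data
# while implying different F2-style risk attitudes.

def expected_utility(u, lottery):
    """Probability-weighted average utility of a lottery [(prob, outcome), ...]."""
    return sum(p * u(x) for p, x in lottery)

u1 = lambda x: x ** 0.5        # concave in money: risk-averse as a vNM utility
u2 = lambda x: u1(x) ** 3      # monotone transform of u1: same ordinal ranking

# Sure outcomes: both functions agree that 100 is better than 25.
assert (u1(100) > u1(25)) == (u2(100) > u2(25))

# A 50/50 gamble between 0 and 100, versus 25 for sure:
gamble = [(0.5, 0.0), (0.5, 100.0)]
# Under u1 the gamble is worth 5.0 = u1(25): indifference.
# Under u2 the gamble is worth 500.0 > u2(25) = 125.0: strict preference.
print(expected_utility(u1, gamble), u1(25))
print(expected_utility(u2, gamble), u2(25))
```

The ratios of utility differences differ between u1 and u2, and it is exactly those ratios that fix which gambles get accepted.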

This distinction is treated in the theoretical literature; see, for example, Mas-Colell, Whinston, and Green, Microeconomic Theory, Chapter 6, which makes the distinction explicit, or Kreps, Notes on the Theory of Choice, which provides a particularly careful treatment.

But in practice, in textbooks, in casual discussion, the two get conflated constantly.

People say "utility function" without specifying which one they mean, and the ambiguity does real damage.

Here is the specific confusion that matters for our purposes.

When someone says the rational agent maximizes expected utility, this sounds, to a casual listener, like it means the rational agent computes the probability-weighted average of their subjective values across all possible outcomes.

In other words, it sounds like the agent takes F1, the function representing how good each outcome feels or how much they value it, and averages it across possible worlds, weighted by probability.

This would mean that the agent literally values a gamble at the weighted sum of how much they value each possible result.

But this is only true if F1 and F2 are the same function.

And they are generally not the same function.

They coincide only in the special case where the agent's risk attitudes happen to perfectly match the curvature of their subjective value function, which is to say, only when the agent treats each possible world as independently valuable and sums across them with no regard for the structure of the gamble as a whole.

There is no reason to expect this, and empirically it does not hold.
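How special that coincidence is can be seen from the uniqueness properties of the two functions. A vNM utility is pinned down only up to positive affine transformation: any affine reshaping (a sketch with assumed numbers, not from the episode) leaves every gamble ranking intact, while a merely monotone reshaping of subjective value generally flips some of them, even though it preserves every sure-outcome ranking.

```python
# Sketch (assumed example): vNM utilities are unique only up to positive
# affine transformation. An affine transform a*u + b (a > 0) preserves every
# gamble ranking; a monotone but non-affine transform generally does not.

def expected_utility(u, lottery):
    """Probability-weighted average utility of a lottery [(prob, outcome), ...]."""
    return sum(p * u(x) for p, x in lottery)

u = lambda x: x ** 0.5
affine = lambda x: 3 * u(x) + 7   # affine transform: same preferences over all gambles
monotone = lambda x: u(x) ** 2    # equals x, linear in money: different risk attitude

gamble = [(0.5, 0.0), (0.5, 100.0)]
sure = [(1.0, 30.0)]

# u and its affine transform agree: the sure 30 beats the 50/50 gamble.
assert (expected_utility(u, gamble) > expected_utility(u, sure)) == \
       (expected_utility(affine, gamble) > expected_utility(affine, sure))

# The monotone but non-affine transform disagrees: it prefers the gamble.
print(expected_utility(monotone, gamble), expected_utility(monotone, sure))
```

So "maximize the probability-weighted average of subjective value" is only equivalent to expected utility maximization when the subjective value function already is (an affine transform of) the vNM utility.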

Why does this matter for what follows?

Because before addressing the serious arguments for expected utility theory (EUT), I want to dispose of argument zero: that EUT is good because it averages subjective utilities over possible worlds. It doesn't.

Independence is sufficient but not necessary for avoiding exploitation.