Ihor Kendiukhov
Now, F2 must agree with F1 on one thing: the ranking of degenerate lotteries, that is, outcomes received with certainty.
If you prefer bundle A to bundle B with certainty, then F2(A) > F2(B), just as F1(A) > F1(B). But F2 contains strictly more information than F1.
It tells you not just that you prefer A to B, but how much you prefer A to B relative to other pairs, in the precise sense that these ratios of differences determine what gambles you would accept.
F1 says nothing about gambles at all.
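To make that concrete, here is a small worked example; the outcomes A, B, C and the utility numbers are hypothetical, chosen only to illustrate the point. Suppose

\[
F_2(A) = 10, \qquad F_2(B) = 6, \qquad F_2(C) = 0.
\]

The agent is indifferent between B for sure and a gamble giving A with probability p and C otherwise exactly when

\[
p\,F_2(A) + (1-p)\,F_2(C) = F_2(B)
\;\Longrightarrow\;
p = \frac{F_2(B)-F_2(C)}{F_2(A)-F_2(C)} = 0.6.
\]

Now take the increasing transform G = (F2)^2, so G(A) = 100, G(B) = 36, G(C) = 0. The ordinal ranking A over B over C is untouched, but the indifference probability drops to 0.36. Those ratios of differences are exactly the extra information F2 carries and F1 does not.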
This distinction is treated carefully in the theoretical literature; see, for example, Mas-Colell, Whinston, and Green, Microeconomic Theory, Chapter 6, which makes the distinction explicit, or Kreps, Notes on the Theory of Choice, which provides a particularly careful treatment.
But in practice, in textbooks, in casual discussion, the two get conflated constantly.
People say "utility function" without specifying which one they mean, and the ambiguity does real damage.
Here is the specific confusion that matters for our purposes.
When someone says the rational agent maximizes expected utility, this sounds, to a casual listener, like it means the rational agent computes the probability-weighted average of their subjective values across all possible outcomes.
In other words, it sounds like the agent takes F1, the function representing how good each outcome feels or how much they value it, and averages it across possible worlds, weighted by probability.
This would mean that the agent literally values a gamble at the weighted sum of how much they value each possible result.
But this is only true if F1 and F2 are the same function.
And they are generally not the same function.
They coincide only in the special case where the agent's risk attitudes happen to perfectly match the curvature of their subjective value function, which is to say, only when the agent treats each possible world as independently valuable and sums across them with no regard for the structure of the gamble as a whole.
There is no reason to expect this, and empirically it does not hold.
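A hypothetical numerical example makes the gap visible; the functional forms below, a square root for the subjective value function and a fourth root for the von Neumann-Morgenstern function, are invented purely for illustration. Take

\[
F_1(w) = \sqrt{w}, \qquad F_2(w) = w^{1/4}, \qquad
\text{a gamble paying } \$100 \text{ or } \$0 \text{ with probability } \tfrac{1}{2} \text{ each.}
\]

\[
\text{Average of } F_1:\ \tfrac{1}{2}\sqrt{100} = 5 = F_1(25),
\qquad
\text{Expected } F_2:\ \tfrac{1}{2}\,100^{1/4} \approx 1.58 = F_2(6.25).
\]

Averaging the subjective value function F1 says the gamble should feel as good as $25 for sure; the agent's actual behavior, governed by F2, prices it at a certainty equivalent of $6.25. The two functions rank sure amounts of money identically, yet they disagree about the gamble, because the agent's risk attitude adds curvature beyond that of the subjective value function.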
Why does this matter for what follows?
Because before addressing the serious arguments for EUT, I want to address argument 0: the claim that EUT is good because it averages subjective utilities over possible worlds. It doesn't.
Independence is sufficient but not necessary for avoiding exploitation.