Jan Kulveit
Assume or imply the modal view is right ten times over, and your analysis holds in about 0.6% of worlds (if each modal assumption holds with probability 0.6, then 0.6^10 ≈ 0.006).
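A minimal sketch of the arithmetic, assuming ten independent assumptions each holding with probability 0.6 (the numbers are illustrative, not from any specific analysis):

```python
# If an analysis stacks ten independent assumptions, each the "modal"
# view with probability 0.6, the chance that all of them hold is their
# product:
p_all = 0.6 ** 10
print(f"{p_all:.1%}")  # prints "0.6%"
```

With correlated assumptions the number changes, but the qualitative point stands: stacked modal assumptions cover a small slice of possible worlds.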
In practice, this is usually not done explicitly; almost no one claims their analysis considers all important factors. Instead it happens as a form of motte-and-bailey fallacy.
The motte is the math in the paper, which follows from the assumptions, and there are many of these. The bailey is the broad-stroke arguments: blog-post summaries, tweets, and shorthand references, which spread much further without the hedging.
In the worst cases, various assumptions made are contradictory or at least anti-correlated.
For example, some economists assume that comparative advantage generally preserves the relevance of human labor, and that AIs are just a form of capital which can be bought and replicated.
However, comparative advantage depends on opportunity costs.
If you do X, you cannot do Y at the same time.
The implicit assumption is that you cannot just boot up a copy of yourself. If you can, the opportunity cost is not something like the cost of your labor, but the cost of booting up another copy.
If you assume future AGIs are similarly efficient substitutes for human labor as current AIs are for moderately boring copywriting, the basic comparative advantage model is consistent with labor price dropping 10,000x below minimum wage.
While the comparative advantage model is still literally true, it does not have the same practical implications.
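The wage implication can be sketched with a toy calculation. All numbers below are hypothetical, chosen only to illustrate the mechanism: once AI labor is replicable at the cost of compute, the most an employer will pay a human is the cost of producing the same output with extra AI copies.

```python
# Toy model (hypothetical numbers): in the standard comparative-advantage
# setup, a less productive agent still earns a wage tied to relative
# opportunity costs. If AI copies can be booted at the cost of compute,
# the human wage is instead capped by the cost of one more copy.

def human_wage_cap(ai_output_per_hour, human_output_per_hour,
                   compute_cost_per_ai_hour):
    """Max an employer would pay a human per hour: the cost of
    producing the same output with additional AI copies instead."""
    hours_of_ai_needed = human_output_per_hour / ai_output_per_hour
    return hours_of_ai_needed * compute_cost_per_ai_hour

# Hypothetical numbers: an AI produces 100 units/hour at $0.10/hour
# of compute; a human produces 10 units/hour.
cap = human_wage_cap(ai_output_per_hour=100,
                     human_output_per_hour=10,
                     compute_cost_per_ai_hour=0.10)
print(f"${cap:.3f}/hour")  # $0.010/hour, far below any minimum wage
```

Comparative advantage still holds formally in this model; it just clears at a wage no human can live on.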
Also, while in the human case the comparative advantage model is usually not destroyed by frictions, if your labor is of sufficiently low value, the effective price of human labor can be zero.
For a human example, five-year-olds, or people with severe mental disabilities who are unable to read, are not actually employable in the modern economy.
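The friction point can be made explicit with the same toy model (again, hypothetical numbers): employing a human carries fixed costs that the employer pays on top of the wage, and once those exceed the wage cap, the market-clearing wage is effectively zero.

```python
# Toy extension of the wage-cap sketch: employment frictions
# (management overhead, interfaces, error correction) are borne by
# the employer. Once they exceed the wage cap, no positive wage
# clears the market.

def effective_human_wage(wage_cap, employment_frictions):
    """Wage an employer can offer after paying per-hour frictions;
    floored at zero, i.e. unemployable."""
    return max(0.0, wage_cap - employment_frictions)

# Hypothetical: wage capped at $0.01/hour, frictions cost $0.50/hour.
print(effective_human_wage(wage_cap=0.01, employment_frictions=0.50))  # 0.0
```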
In the post-AGI economy, it is easy to predict frictions like humans being unable to operate at machine speeds or to understand directly communicated neural representations.
What to do?
To return to the opening metaphor, economic reasoning projects high-dimensional reality into a low-dimensional model.
The hard work is choosing the projection.
Post-AGI, we face a situation where the reality we are projecting may be different enough that projections calibrated on human economies systematically fail.
The solution is usually to step back and bring more variables into the model.