Jan Kulveit
Another distinction is that humans are somewhat fixed in number: you cannot easily or quickly increase or decrease their count.
Post-AGI, this separation may stop making sense.
AIs may reproduce like capital, act as agents like labor, learn quickly, and produce innovation like humans. Humans may own them like ordinary capital, or more like slaves, or AIs may be self-owned.
Better and worse ways to reason about post-AGI situations.
There are two epistemically sound ways to deal with the problems of generalizing economic assumptions: broaden the view, or narrow the view.
There are also many epistemically problematic moves people make.
Broadening the view means we try to incorporate all crucial considerations.
If assumptions about private property lead us to think about post-AGI governance, we follow.
If thinking about governance leads to the need to think about violence and military technology, we follow.
In the best case, we think about everything in terms of probability distributions and more or less likely effects.
This is hard, interdisciplinary, and necessary if we are interested in forecasts or policy recommendations.
Narrowing the view means focusing on some local domain, trying to build a locally valid model, and clearly marking all the assumptions.
This is often locally useful and may build intuitions for some dynamic, and it is fine as long as real effort is spent on delineating where the model may apply and where it clearly does not.
What may be memetically successful and attract a lot of attention, but is bad overall, is doing the second kind of analysis and presenting it as the first.
A crucial consideration is a consideration that can flip the result.
If an analysis ignores or assumes away 10 of these, the results have basically no practical relevance.
Imagine for each crucial consideration, there is 60% chance the modal view is right and 40% it is not.
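The arithmetic behind this point can be made concrete: if the considerations are independent, the chance that the modal view survives all of them shrinks geometrically with their number. A minimal sketch, using the 60% figure from the text's hypothetical (the independence assumption and the choice of counts are illustrative, not from the source):

```python
# Probability that the modal view is right on any single crucial consideration
# (figure taken from the text's hypothetical).
p = 0.6

# If n crucial considerations are independent, the modal conclusion holds
# only when ALL of them break the expected way: p ** n.
for n in (1, 5, 10):
    all_right = p ** n
    print(f"n = {n:2d}: chance the modal view survives all = {all_right:.4f}")
```

With ten assumed-away crucial considerations, 0.6^10 is roughly 0.006, i.e. under a 1% chance that the analysis's conclusion survives, which is why such results carry almost no practical relevance.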