David Kipping
And so then you apply this technique called Bayesian model averaging, which is where you propagate the uncertainty of your two models to get a final estimate.
And because of that one base reality that lives within the simulated scenario,
you end up counting this up and finding that it always has to be less than 50%.
So the probability of living in a simulated reality versus a base reality has to be slightly less than 50%.
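As a rough illustration of that model-averaging step, here is a minimal sketch of the counting logic as described here (not the paper's full calculation; the function name and the reality count are my own assumptions):

```python
# Bayesian model averaging over the two hypotheses: weight each model's
# prediction by its prior probability and sum.
# Assumption: under the simulation hypothesis there are n_realities in
# total, exactly one of which is the base reality.

def prob_simulated(prior_sim=0.5, n_realities=1_000_000):
    """Model-averaged probability that we live in a simulated reality."""
    # Under the physical (non-simulation) hypothesis, nobody is simulated.
    p_sim_given_physical = 0.0
    # Under the simulation hypothesis, everyone except the single base
    # reality is simulated -- this is the "counting up" step.
    p_sim_given_simulation = (n_realities - 1) / n_realities
    # Average the two conditional probabilities, weighted by the priors.
    return ((1 - prior_sim) * p_sim_given_physical
            + prior_sim * p_sim_given_simulation)

print(prob_simulated())  # 0.4999995 -- strictly below 50%, as claimed
```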
Now, that really comes down to that statement of giving it 50-50 odds to begin with.
And on the one hand, you might say, look, David, I work in artificial intelligence, I'm very confident that this is going to happen, just from extrapolating current trends.
Or on the other hand, a statistician would say, you're giving way too much weight to the simulation hypothesis because it's an intrinsically highly complicated model.
You have a whole hierarchy of realities within realities within realities.
It's like the inception-style thing, right?
And so this requires hundreds, thousands, millions of parameterizations to describe.
And by Occam's razor, we would normally penalize inherently complicated models and disfavor them.
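One toy way to see the statistician's objection (my own illustration, not anything from the paper): spread the simulation hypothesis's prior mass across an ever-deeper hierarchy, discounting each extra nested level it postulates, and the hypothesis's weight shrinks rapidly.

```python
# Toy Occam-style penalty (an illustrative assumption, not Kipping's
# method): discount a model's prior by a factor per extra nested level
# it requires, then renormalize against the simpler model.

def occam_weights(n_extra_levels, discount=0.5):
    """Return (prior_physical, prior_simulated) after penalizing the
    simulation hypothesis for each nested level it postulates."""
    w_physical = 1.0                           # simple model: one reality
    w_simulated = discount ** n_extra_levels   # penalty grows with depth
    total = w_physical + w_simulated
    return w_physical / total, w_simulated / total

print(occam_weights(0))   # (0.5, 0.5)         -- no penalty: the 50-50 start
print(occam_weights(10))  # (~0.999, ~0.001)   -- deep hierarchy heavily disfavored
```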
So I think you could argue I'm being too generous or too kind with that, but I sort of want to develop the rigorous mathematical tools to explore it.
And ultimately, it's up to you to decide what you think that 50-50 odds should be.
But you can use my formula to plug in whatever you want and get the answer.
And I use 50-50.
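To make the "plug in whatever you want" point concrete with the hypothetical prob_simulated sketch above (again, my own illustration):

```python
# Vary the prior you place on the simulation hypothesis and see how the
# model-averaged answer tracks it, always landing just below the prior.
for prior in (0.1, 0.5, 0.9):
    print(f"prior={prior:.1f} -> P(simulated)={prob_simulated(prior):.7f}")
# prior=0.1 -> P(simulated)=0.0999999
# prior=0.5 -> P(simulated)=0.4999995
# prior=0.9 -> P(simulated)=0.8999991
```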
Yeah.
The simulation hypothesis has all sorts of implications like that.
As Sean Carroll pointed out, there's a really interesting contradiction apparently with the simulation hypothesis that I speak about a little bit in the paper.
He pointed out that in this hierarchy of realities, each reality then develops its own AIs within it,
or really ancestor simulations, I should say, rather than AIs.