The theory, which I probably misunderstand because I have a similar level of education to a macaque, states that because a simulated world would eventually develop to the point where it creates its own simulations, it’s then just a matter of probability that we are in a simulation. That is, if there’s one real world, and a zillion simulated ones, it’s more likely that we’re in a simulated world. That’s probably an oversimplification, but it’s the gist I got from listening to people talk about the theory.
But if the real world sets up a simulated world which more or less perfectly simulates itself, creating a mirror sim-within-a-sim would need at least twice the processing power/resources of the first simulation, no? How could the infinitely recursive simulations even begin to be set up unless the real meat people keep adding more and more hardware to their initial simulation? It would be like that cartoon (or was it a silent movie?) of a guy laying down train track struts while sitting on the cowcatcher of a moving train. Except in this case the train would be moving at close to the speed of light.
Doesn’t this fact alone disprove the entire hypothesis? If I set up a 1:1 simulation of our universe, then just sit back and watch, any attempt by my simulant people to create something that would exhaust all of my hardware would just… not work? Blue screen? Crash the system? Crunching the numbers of a 1:1 sim within a 1:1 sim would not be physically possible for a processor that can just about handle the first simulation. The simulation’s own simulated processors would still need to have their processing done by Meat World; you’re essentially just passing the CPU-buck backwards like it’s a rugby ball until it lands in the lap of the real world.
And this is just if the simulated people create ONE simulation. If 10 people in that one world decide to set up similar simulations simultaneously, the hardware for the entire sim reality would be toast overnight.
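The resource math above can be sketched as a toy calculation. Everything here is illustrative: `total_demand`, `base_cost`, and the numbers are made up to show the shape of the argument, not taken from any actual analysis.

```python
# Toy model of the resource argument: a 1:1 simulation costs as much
# compute as the world it mirrors, so nesting multiplies demand.
# All names and numbers are hypothetical.

def total_demand(base_cost: float, sims_per_level: int, depth: int) -> float:
    """Compute units needed to run `depth` levels of nested 1:1 sims,
    where every simulated world spawns `sims_per_level` sims of its own."""
    demand = 0.0
    for level in range(1, depth + 1):
        demand += base_cost * sims_per_level ** level
    return demand

# One sim per level: demand grows linearly with depth...
print(total_demand(1.0, 1, 10))   # 10 levels -> 10x the host's capacity
# ...but ten sims per level (the "10 people" case) grows geometrically:
print(total_demand(1.0, 10, 10))  # ~1.1e10x the host's capacity
```

With ten sims spawned per level, demand is a geometric series, which is the "toast overnight" scenario in miniature.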
What am I not getting about this?
Cheers!
The assumption is that the simulation runs constantly and at least as fast as real time.
Neither needs to be true. A simulation might exist to see what would have happened if we’d made different choices; it might be a video game; it might be a way to generate TV shows set in “the historical past” that we consider present time.
We might just be an experiment to see if free will exists. Start 10,000 identical simulations, run them for a century, and at the end compare the results: see what’s changed, and whether those changes snowballed or evened out.
And just as video games only “draw” what’s in the field of view, a simulation could run the same way, drastically cutting down resource needs.
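That “only draw what’s observed” idea is ordinary lazy evaluation. Here is a minimal sketch, assuming regions of a hypothetical universe get computed only on first access; the `observe` function and the region states are purely illustrative.

```python
# Sketch of "only draw what's observed": regions of a hypothetical
# universe are computed lazily, the first time someone looks at them.
# All names here are illustrative, not from any real engine.

computed = {}  # region -> state; only observed regions ever exist

def observe(region: tuple) -> str:
    """Return the state of a region, generating it on first access."""
    if region not in computed:
        computed[region] = f"state of {region}"  # expensive physics would go here
    return computed[region]

observe((0, 0))
observe((0, 1))
# Only 2 of the effectively unbounded regions were ever computed:
print(len(computed))  # 2
```

The host pays only for what its inhabitants actually look at, which is why the field-of-view trick cuts resource needs so sharply.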
And “impossible levels of energy” isn’t really right, either. At a certain point a species can build a Dyson sphere, and once they’ve built the first, every subsequent one is a cakewalk. It’s as close as possible to “infinite energy”; there’s no real reason to even go past one.
Hell, it doesn’t need to be “everything” everything. Generate one solar system, and as long as no one leaves it, you don’t need to simulate anything beyond it other than some lights.