I've been thinking for a while about this particular sort-of metaphysical model of simulation, and I think I have a rough framework for it now. The idea was originally spurred by the concept of Tumbolia in the book GEB. I was wondering whether you could get a Tumbolia-like place via encoding and decoding highly entropic processes. For example: you write some words on a piece of paper, then you burn the paper, making it highly entropic because its matter is scattered randomly across many possible configurations. The paper can (theoretically) still be reconstructed, with extraordinary difficulty and huge amounts of energy, so its information isn't necessarily lost. The trick, then, is doing the same thing so that a time-dependent system continues to consistently evolve in time after it is translated to a highly entropic form
Abstractly: within a system `A`, you have a subsystem `B` and an encoder `s` that maps `B` to a smaller system `b`. If this `s: B -> b` relationship continues to hold as `A` and `b` evolve in time, then I'll say that `A` simulates `b`. With the additional assumption that `b`'s entropy is independent of `B`'s entropy, we arrive at the result I was looking for. Here `b` is a system whose information is contained in its host system `A`, and even if `b`'s presence in `A` becomes arbitrarily entropic, `b` itself isn't necessarily highly entropic
Note: this all depends on which map `s` you use to change your host subsystem `B` into your informationspace system `b`
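To make the definition concrete, here's a minimal toy sketch (my own construction, not from the original idea): the host `A` is a list of cells, the subsystem `B` is its first three cells, and the encoder `s` compresses `B` down to a single number. "`A` simulates `b`" then means the encoding commutes with time evolution.

```python
# Toy illustration of "A simulates b": encoding a subsystem B of host A
# down to a smaller system b, such that encoding commutes with evolution.
# All dynamics here are hypothetical stand-ins chosen for simplicity.

def step_A(A):
    """Host dynamics: every cell increments each tick."""
    return [x + 1 for x in A]

def s(A):
    """Encoder from the subsystem B = A[:3] down to b (here: their sum)."""
    return sum(A[:3])

def step_b(b):
    """b's own dynamics, induced by the host's: three cells each gain 1."""
    return b + 3

A = [4, 7, 1, 9, 9]   # host state; the last two cells are "the rest of A"
b = s(A)              # b's initial state

# The simulation condition s(step_A(A)) == step_b(s(A)) holds at every tick:
for _ in range(10):
    assert s(step_A(A)) == step_b(s(A))
    A, b = step_A(A), step_b(b)
```

The point of the toy is only the commuting condition: `b` has its own consistent dynamics even though its state is nothing but an encoding of part of `A`.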
This isn't necessarily such a strange idea. Consider: you write a word on a piece of paper, and at t = 5 seconds you torch the paper. At t = 1, 2, ..., 4 seconds the paper contains the same information, so if you look into the informationspace (`b`) of the paper, the content doesn't change. Then at t = 5 seconds the paper burns, but the information is still reconstructible. Since we can define `s` (the encoder from the paper to the informationspace of the paper) to be anything, we can find one that encodes the ashes of the paper such that the paper's information never changes. Inside this `b` the word never changes, despite the paper in the host universe having been torched
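A toy version of the torched paper (again my own construction): here "burning" scrambles the word's characters into a high-entropy-looking jumble, but since the scrambling is in principle reversible, we can pick an encoder `s` that decodes the ashes back to the original word, so inside the informationspace `b` nothing ever changed.

```python
import random

def burn(word, seed=42):
    """'Burn' the paper: scramble its characters with a fixed permutation."""
    idx = list(range(len(word)))
    random.Random(seed).shuffle(idx)
    return "".join(word[i] for i in idx)

def s_ashes(ashes, seed=42):
    """Encoder from the ashes to the informationspace: invert the permutation."""
    idx = list(range(len(ashes)))
    random.Random(seed).shuffle(idx)
    unscrambled = [""] * len(ashes)
    for pos, i in enumerate(idx):
        unscrambled[i] = ashes[pos]
    return "".join(unscrambled)

paper = "tumbolia"
ashes = burn(paper)
assert s_ashes(ashes) == paper   # in b, the word never changed
```

Real burning isn't a seeded permutation, of course; the sketch only shows that when the "entropic" process is reversible in principle, a suitable choice of `s` makes the encoded content constant through it.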
I realized only later that this idea has one very strange application: if you take `A` to be the apparent physical universe, `B` as your brain, and `b` as the informationspace of your mind, and you use the entropy-independence assumption above, then even after your brain is destroyed in your host universe you are still consciously aware (from a metaphysical perspective). So at first glance it would seem that after you die (brain destruction) you will see nothing but a void forever. Kind of horrifying tbh
*However*, a keen reader might note that in the case of an encoded mind there is necessarily an information-reduced version `a` of `A` in the informationspace `b` (analogous to your model of external reality in your mind). Since `a` *also* doesn't have to become highly entropic when `B` does, this implies that after you die you *smoothly enter an arbitrarily different universe*
Even weirder: the informationspace projection of the host universe `a'` must necessarily appear to be simulating `b` even after death. Not only do you enter a different universe, but you can't tell that the universe you're now in is itself encoded within a highly entropic process in the pre-death universe
Assuming all of the above holds, doesn't that imply that the host universe simulating us right now is itself the projection of some highly entropic process in some other universe `A'`? And the same implication holds for `A'`: the universe that came before this one is also being simulated by a highly entropic process in yet another universe. And so on
Naturally, this all raises the question of which universe is the *real* host doing the simulating. Interestingly, in every case you are being simulated by *a* host universe. Instead of thinking in terms of a dichotomy between the universe you appear to be simulated by and the universe that should be simulating that universe, you can be sure that there is an underlying host universe simulating you, and what you're seeing is always some informationspace projection of it
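One way to see why the regress is harmless, sketched as another toy (my construction): if a deeper universe simulates your apparent host via an encoder `s1`, and the apparent host simulates you via `s2`, the composed encoder `s2 ∘ s1` is itself a simulation map, so *some* host is always simulating you directly.

```python
# Toy sketch: composing simulation maps. If A' simulates A (via s1) and
# A simulates b (via s2), then A' simulates b via s2 . s1.
# All dynamics and encoders here are hypothetical stand-ins.

def step_deep(state):   # dynamics of the deeper host A'
    return [x + 1 for x in state]

def s1(state):          # encoder: deeper host A' -> apparent host A
    return state[:4]

def s2(state):          # encoder: apparent host A -> your informationspace b
    return sum(state[:2])

def step_b(b):          # induced dynamics of b: two cells each gain 1
    return b + 2

def s_composed(state):  # the direct simulation map A' -> b
    return s2(s1(state))

deep = [3, 1, 4, 1, 5, 9]
for _ in range(5):
    # the simulation condition commutes through the whole chain:
    assert s_composed(step_deep(deep)) == step_b(s_composed(deep))
    deep = step_deep(deep)
```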
The more exotic way of viewing this is that the information matters more than the simulation relationship. In this model, you are actually independent of your apparent host universe `A`, but there is a map from the information in a subprocess `B` to your information `b`, and that map is incidentally accurate and remains accurate through time. Then reality (up to simulation) is just a wad of maps between informationspaces