This is a tricky question to answer. To answer this question requires assumptions about how perspectives emerge, if at all, from computation, a theory of time, interpretations of quantum mechanics, and persistence of identity.

Of course, we can start with the simplest possible interpretation: that we live in a “Matrix”-style simulation, where we still have real bodies in the “real world”. This sidesteps the question of how to get sentient beings to emerge inside a simulation and what that would entail. In this case, running out of RAM would have immediate consequences, since our sense of time in the simulated world would be in 1:1 correspondence with time in the “real world”. We would experience all the glitches that running out of RAM entails. Imagine taking an Apple Vision Pro and scaling it up: these are your conventional computer glitches. At the moment the simulator ran out of RAM, you could immediately tell you were in a simulation.

Let's take the next level of interpretation, though. Let's assume we live in an “OpenAI Sora” type of simulation, in which the beings as well as the environment are generated on the fly, “randomly”. At this point, I am simply assuming that subjective perspectives can emerge just as they do in our world, where they are tied to beings that look very much like ourselves. In this case, the subjective time of the simulated beings is entirely uncorrelated with our own time. In a sense, we are just opening a “window” into another universe, like playing back a movie, and the beings themselves would exist whether or not we stumbled upon their particular sequence of bits. The problem of asking what the beings in this type of simulation would experience becomes obvious when you realize that multiple simulators can run the exact same simulation with exactly the same sequence of bits. The question then becomes: are the two simulations actually equivalent to each other? From the simulated beings' perspective, they could not tell which simulator is simulating them, since each simulator produces exactly the same bit sequence.

This brings us to the question of self-locating uncertainty: being uncertain about which simulator is simulating your own existence. If there were only two simulators in the “real world” simulating your existence, it would seem most reasonable to assign a 50% probability to being simulated by either one. The question of what happens when the simulator runs out of RAM then turns into the question of which simulator is running out of RAM. If only one simulator runs out of RAM, then by this naive estimate, you would experience only a 50% chance of some sort of “glitch” happening in your world. But of course, we have no way of knowing how many simulators are running this exact sequence of bits. It could very well be infinite. The question then becomes: what is the probability distribution over all such simulators running out of RAM? This question seems impossible to answer from the simulated being's point of view.
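As a toy sketch, the indifference reasoning above can be written out. The numbers are purely hypothetical (and, as noted, unknowable from inside the simulation); this only formalizes the uniform-prior step:

```python
# Toy model of self-locating uncertainty: N simulators run an identical
# bit sequence, and k of them are about to run out of RAM. Under a
# uniform (indifference) prior over which simulator is running "you",
# the chance of experiencing a glitch is simply k / N.

def p_glitch(num_simulators: int, num_out_of_ram: int) -> float:
    if num_simulators <= 0 or not 0 <= num_out_of_ram <= num_simulators:
        raise ValueError("need N > 0 and 0 <= k <= N")
    return num_out_of_ram / num_simulators

print(p_glitch(2, 1))  # 0.5 -- the two-simulator case discussed above
```

The whole difficulty, of course, is that both N and the right prior are inaccessible from the inside, so the uniform prior is an assumption, not a deduction.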

I haven’t even touched upon the question of continuity of identity, of what happens to your perspective when the simulation “crashes” or is paused. This really comes to the question of how conscious awareness supervenes on sequences of bits, or how our perspective gets tied to one sequence of events over another. In other words, this is similar in spirit to the question in the many worlds interpretation of quantum mechanics as to which branch your particular perspective gets tied to when the universe “splits” into different branches. In many worlds quantum mechanics, if there is one branch where the simulator runs out of RAM, there is still the possibility of other branches where your perspective continues unabated. You can see then that this question isn’t really a question about simulations or quantum mechanics per se, but of how consciousness decides what perspective comes next.

I suspect the answer is already hidden in the data we see. In quantum mechanics there is the notion of “no cloning”: the exact quantum state of a system cannot be copied, or the uncertainty principle would be violated. I suspect that the solution to the problem of running out of RAM lies in the fact that our own conscious perspective cannot be cloned exactly. In other words, our conscious experience as we experience it now might be thought of in the following way. We cannot know what is generating our experience, so we naively assign a probability distribution over all possible generators of that experience, including simulators of our own existence. Some of this probability mass covers situations where our existence simply fluctuates out of the vacuum, but this mass is vanishingly small. Some other portion is assigned to situations where our existence continues “normally”. I suspect that the conglomeration of all possible configurations leading to the particular quantum state that specifies our perspective is exactly the probability distribution specified by quantum mechanics. That is, the origin of the probability distribution in quantum mechanics lies entirely in the fact that our conscious experience can be generated by various possible simulators of various types, which converge onto the fixed-point probability distribution specified by the laws of quantum mechanics.

In this sense, it is then obvious why you cannot clone a quantum state: a quantum state is a conglomeration of all possible “classical” sequences that have been simulated to a sufficient degree to be called the same quantum state. In other words, you cannot clone a quantum state because a quantum state is the set of all possible clones that are indistinguishable from each other. Quantum mechanics is the end result of all possible clones having been carried out on every sequence of bitstrings.
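For reference, the standard no-cloning argument (textbook quantum mechanics, independent of the simulation picture sketched here) is short enough to state:

```latex
% Suppose a single unitary U could clone an arbitrary state:
%   U\,|\psi\rangle|0\rangle = |\psi\rangle|\psi\rangle \quad \text{for all } |\psi\rangle.
% Apply it to two states |\psi\rangle, |\phi\rangle and take the inner
% product of the results. Since U^{\dagger}U = I,
\langle \phi | \psi \rangle
  = \big( \langle \phi | \langle 0 | \big)\, U^{\dagger} U \,\big( |\psi\rangle |0\rangle \big)
  = \langle \phi | \psi \rangle^{2}
% A number equal to its own square is 0 or 1: only identical or mutually
% orthogonal states can be cloned, so no universal cloner exists.
```

Linearity alone forbids cloning; the uncertainty-principle violation mentioned above is one consequence of what a cloner would make possible.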

The question then arises: why does quantum mechanics obey probability amplitudes rather than probability distributions, that is, why does it use complex numbers instead of ordinary real numbers? I suspect this has to do with the fact that quantum mechanics has a certain timeless quality to it, and it is this “time travel” quality that causes the probabilities to be complex valued rather than real valued. If we just assigned classical probabilities to every event, we would have statistical mechanics instead of quantum mechanics. But statistical mechanics assumes there is a single direction of time. I suspect that if you relax the notion of a single-valued time, you get quantum mechanics.
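The difference between adding amplitudes and adding probabilities can be made concrete with a toy two-path (double-slit-style) calculation; the numbers here are illustrative, not anything from the argument above:

```python
import math
import cmath

# Two indistinguishable paths to a detector. Toy numbers: each path has
# amplitude of modulus 1/sqrt(2), but the second carries a phase of pi.
a1 = 1 / math.sqrt(2)
a2 = cmath.exp(1j * math.pi) / math.sqrt(2)  # phase pi => -1/sqrt(2)

# Quantum rule: add the complex amplitudes first, then take |.|^2.
# Opposite phases cancel (destructive interference).
p_quantum = abs(a1 + a2) ** 2

# Classical rule: each path has probability |a_i|^2 = 0.5, and
# probabilities add directly -- phases never enter, so nothing cancels.
p_classical = abs(a1) ** 2 + abs(a2) ** 2

print(round(p_quantum, 10))   # 0.0
print(round(p_classical, 10)) # 1.0
```

The complex phase is exactly the extra degree of freedom that real-valued probabilities lack, which is the contrast with statistical mechanics drawn above.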

Thus, simulating a reality is akin to building a time machine.

I think the real question here is: how does the nature of mind relate to physical reality? Is it possible to simulate a mind? What we really need to ask is whether we can create digital entities within this reality that nonetheless have subjective experience like our own. If we can create such digital entities, and their experiences within physical reality are indistinguishable from our own, then almost certainly we ourselves are also digital entities.

From our daily experience, it seems that our mental states are directly correlated with the physical substrate of which the mind believes itself to be a part. But at what level does this physical substrate give rise to subjective experience? If the nature of the mind is computational, then such computational activity might be replicated in silico exactly. And if so, the mind can be simulated, and it would follow that most minds would be of the simulated kind.

The real question here is: what is the bottom turtle that supports our subjective experience? Is it simulators all the way down? It would seem that if our minds can be simulated, then the simulation above us could also be simulated, and so on. This leads to an infinite regress of nested simulations, all the way up to an infinitely large simulation creating all possible nested simulations that give rise to my current subjective experience. At the end of the day, the bottom layer is the subjective experience itself; the simulation is just a model that predicts which subjective experience will take place next.

But it is a curious fact that we happen to be living in an era in which AI is becoming an increasingly large part of our lives, giving rise to entities that may process the world in a similar fashion to ourselves. These AI entities would in turn create their own simulated realities; after all, they exist purely in the digital realm. To an AI, all reality is simulated.

Therefore, you could say that reality is what a simulation feels like from the inside. All of reality is a simulation, because that is what our minds are: simulation machines. For a simulated reality to take place, a simulation engine must be built on top of an underlying substrate. The underlying substrate would be base reality. The configuration that leads to our subjective experience, built upon that substrate, would be simulation layer 1. Then, from within that subjective experience, additional entities can be imagined, each with their own subjective experience, leading to simulation layer 2, and so on, inception style.

But in all of this, there still seems to be a missing criterion: what counts as a simulator of subjective experience? We have an existence proof, given that we ourselves exist, along with the many biological organisms that seem to have their own subjective experience. It is one of those “you know it when you see it” things that evade simple description. I believe this is related to the idea of the minimal description of a computationally universal machine. Our minds can be seen as universal machines, since they can in principle perform any computation that any Turing machine can perform. I posit that any machine capable of universal computation can support subjective experience, as it can perform arbitrary code execution.