If we all exist in a simulation, what will happen once we start running out of RAM?
Assuming our simulation is not designed to auto-scale (and our Admins don’t know how to download more RAM), what kind of side effects could we see in the world if the underlying system hosting our simulation began running out of resources?
Simply put: we wouldn't notice anything.
Our perception of the world would be based only on the compute cycles and not on any external time-frame.
The machine could run at a million billion hertz or at one clock cycle per century, and our perception of time inside the machine would be the same.
Same with low RAM: we would have no indication if we were constantly being paged out to a hard drive and read back into RAM as required.
Greg Egan gave a great explanation of this in the opening chapter of his novel Permutation City.
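To make that concrete, here's a toy Python sketch (entirely made up, just to illustrate the point): the agent's only clock is its own step counter, so host-side stalls, swapping, or slowdowns leave no trace it can observe.

```python
import time

class SimulatedAgent:
    """An agent whose only notion of time is the number of update steps."""

    def __init__(self):
        self.internal_time = 0

    def step(self):
        # From the inside, one tick is one tick, no matter how long the
        # host took to compute it or where our memory sat in the meantime.
        self.internal_time += 1

agent = SimulatedAgent()
for _ in range(3):
    agent.step()

# The host stalls: overloaded CPU, pages swapped to disk, admin at lunch.
time.sleep(2)  # two seconds of host wall-clock time pass

for _ in range(3):
    agent.step()

print(agent.internal_time)  # 6 -- the stall left no trace on the inside
```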
Clearly wrong.
Running out of RAM happens all the time. We see something and store it, and that something also gets stored in RAM. But if that second copy gets reaped by the OOM killer, the universe reprocesses it.
Since it's already in our own copy, that causes weird issues. We call it déjà vu!
We can see that already when something approaches the speed of light: time slows down for it.
This simplification horribly misunderstands what time-dilation is, and I love it.
My VM is running out of RAM.
I have a running theory that that's also what's going on with quantum physics, because I understand it so poorly that it just seems like nonsense to me. So in my head, I see it as us getting into some sort of source code we're not supposed to see, and on the other side some programmers are going "fuck I don't know, just make it be both things at once!" and making it up on the fly.
An automatic purge process will start to prevent this. It has happened several times in the past, most recently between 2019 and 2022, when it removed circa 7 million processes. Regular purges like this make sure the resources aren't maxed out before the admins can add more capacity.
Data in memory will be offloaded to swap space. I doubt we'd notice any fluctuations since we're part of the simulation, but externally it could slow to a crawl and basically be useless. They might shut it down, hopefully just to refactor. But again we probably wouldn't notice any downtime, even if it's permanent.
That would be the most pleasant way to go :)
Not sure you've experienced the end of many SimCity games if you think this is the case. 😂
If anything, the earth lately kinda feels like someone's gotten bored with the game.
12 meteors, 8 volcanoes and 10 tornadoes incoming you say?
A landscape full of Arcos and waves of boom and bust?
Maybe we’re already there and death is just the garbage collector freeing up more space.
I love this concept
Could make a good book
If our entire universe is a simulation, so are our laws of physics. In the parent universe running our simulation, things might be powered by pure imagination, and concepts like memory, CPU cycles, or even electricity might not exist at all.
The OOM killer goes on the prowl.
These answers are all really fun but I didn't see anyone point out one thing: why should we assume that our creators' "computer" architecture is anything remotely similar to our technology? I'm thinking of something like SETI—We can't just assume that all other life is carbon-based (though evidently it's a pretty good criterion). The simulation could be running on some kind of dark matter machine or some other exotic material that we don't even know about.
Personally I don't subscribe to the simulation theory. But if it were true, why would the system have any kind of limitation? I feel like if it can simulate everything from galactic superclusters down to strings vibrating in Planck Time, there are effectively no limits.
Then again, infinity is quite a monster, so what do I know?
The short version is that the only other element that allows 4 covalent bonds is silicon, but nobody has been able to find a solvent that allows complex silicon-based molecules to form without instantly dissolving any structures they form.
I remember reading about how silicon is theoretically possible, but I had (erroneously) assumed there were more potential candidates. Thanks for the additional info. This stuff is so fascinating!
Couldn't they just suspend the simulation until they got more resources? We wouldn't notice a thing.
I believe you are thinking in terms of a Turing-machine-like computer. I don’t think it’s possible today to “suspend” the bits in a quantum computer. I also don’t think it’s possible to know if the simulation could be paused (or even “added to” without losing its initial state).
All that shit you forgot? All that "forgotten" history? There you go.
Have you ever noticed when you look into a telescope that it takes a little bit to position yourself right to see what you're looking at? And it seems like you used to be able to do it a lot faster? That's not age, that's actually lag time added to cover decompressing the data.
One word
Alzheimer's
That would only be a problem if you need dynamically allocated memory. It could be a statically allocated simulation where every atom is accounted for.
Given the whole "information can neither be created nor destroyed" aspect of atomic physics, taken literally, this theory checks out.
I did not expect the responses to this question to be as interesting to read as they are 😃
Render distance would be reduced, requiring us to come up with plausible theories to account for the fact that there is a limit to the size of the so-called 'observable universe'.
That's why history repeats itself. It's doing that more frequently these days because there's more people remembering more things.
They take some users offline to free up some memory for everyone else
Why would we run out of RAM? Is there new matter being created? It's not like we're storing anything. We will keep using the same resources.
The nature of quantum interactions being probabilistic could be some resource saving mechanism in a higher order simulation.
New human instances are being created, and as our society's general education keeps going up, they demand more processing power.
As our tech improves, it has to be simulated as well. Not only things like telescopes and the LHC, but also your computer running a game world: that computer doesn't actually exist, so it's really the supercomputer that's running the game.
Obviously, this is just a drop in the bucket for an entity that can make a fully simulated universe, but the situation quickly becomes untenable if we start creating hyper-advanced simulations as well, and we are maybe only a few decades away.
Human instances still run on the same underlying physics. No further RAM is needed.
Everything is made up of atoms/photons/etc. If every particle is tracked for all interactions, it doesn't matter how those particles are arranged, it's always the same memory.
Atoms and photons wouldn't actually exist, they would be generated whenever we measure things at that level.
Obviously, there are many ways to interpret what kind of simulation it would be. A full simulation from the big bang is fun but doesn't make for good conversation, since it would be indistinguishable from reality.
I was thinking more of a video game like simulation, where the sim doesn't render things it doesn't need to.
That can't work unless it's a simulation made personally for you.
I don't follow. If there are others, it would render for them just as much as for me. I'm saying it wouldn't need to render at an atomic level except for the few that are actively measuring at that level.
Everything interacting is "measuring" at that level. If the quantum levels weren't being calculated correctly all the time for you, the LEDs in your smartphone would flicker. All those microscopic effects cause the macroscopic effects we observe.
If it was a simulation, there would be no need to go that far. We simulate physics without simulating the individual atoms.
None of it would be real; the microscopic effects would just be approximated, unless a precise measurement tool were used, and then they would be properly simulated.
We wouldn't know the difference.
But you already said you have to go that far whenever someone is doing something where they could notice microscopic effects.
So it's not a simulation as much as a mind reading AI that continuously reads every sentient mind in the entire universe so as to know whether they are doing a microscopic observation that needs the fine grained resolution result or an approximation can be returned.
There would be no need to go that far at all times is what I'm saying. It's the equivalent of a game rendering stuff far away only when you use a scope. Why render everything at all times if it isn't being used and doesn't affect the experience? It would increase the overhead by an insane amount for little to no gain.
This is also just a thought exercise.
But how does the simulation software know when it needs to calculate that detail? If you are the only person in the simulation, it's obvious because everything is rendered from your perspective. But if it's more than one person in the universe, an ai program has to look at the state of the mind of everyone in the universe to make sure they aren't doing something where they could perceive the difference.
Am I microwaving a glass of water to make tea, or am I curious about that YouTube video where I saw how you can use a microwave to measure the speed of light. Did I just get distracted and didn't follow through with the measurement? Only something constantly monitoring my thoughts can know. And it has to be doing it for everyone everywhere in the entire universe.
The way I see it, it would be coupled with the tool and not the intention someone has with it. So every microwave would render properly at all times, as would most electronics just by their very nature, regardless of what the person plans to do with it.
Actually, I think they could probably just approximate the microwave stuff and only keep precise electrical instruments like oscilloscopes fully rendered.
They only need to render things that give an exact measurement; the microwave trick has a 3% tolerance, which is huge in the scheme of things.
It seems like a lot but it's less than simulating every single atom imo.
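For what it's worth, the scheme being argued about here is easy to sketch in toy Python (all names and stand-in physics invented): cheap approximate updates by default, with the expensive fine-grained path reserved for regions where a precision instrument happens to be present.

```python
from dataclasses import dataclass

@dataclass
class Region:
    name: str
    has_precision_instrument: bool  # oscilloscope, interferometer, LHC...
    energy: float = 0.0

def simulate_quantum_detail(region, dt):
    # Stand-in for the expensive, fine-grained path.
    region.energy += 0.001 * dt

def apply_bulk_approximation(region, dt):
    # Stand-in for cheap averaged physics: same macroscopic outcome, far less work.
    region.energy += 0.001 * dt

def update_region(region, dt):
    """Fidelity follows the tools present in the region, not anyone's intent."""
    if region.has_precision_instrument:
        simulate_quantum_detail(region, dt)
    else:
        apply_bulk_approximation(region, dt)

for region in (Region("kitchen", False), Region("physics lab", True)):
    update_region(region, dt=1.0)
```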
Limitations of hardware resources show up as "natural limits", like the speed of light, in the simulation. The amount of RAM consumed translates to the Hubble bubble, the greatest distance light could have traveled since the beginning of our universe, and more so to the amount of matter and energy contained within it, which is a constant. Energy and matter cannot be created or destroyed, only change forms, so there has been a set amount from the beginning.
The assumption that it isn't designed around memory constraints isn't reasonable.
We have limits on speed, so you can't go fast enough to cause pop-in.
As you speed up, things move more slowly for you, so less processing is needed in spite of more stuff (kind of like a frame-rate drop, but with a fixed number of frames produced).
As you get closer to denser collections of stuff, the same thing happens.
And even at the lowest levels, the conversion from a generative function to discrete units for tracking stateful interactions discards those discrete units if the permanent information about the interaction was erased, which looks like a low-level optimization.
The scale is unbelievable, but it's very memory considerate.
Human sacrifice. Dogs and cats living together. Mass hysteria.
This is a tricky question to answer. Answering it requires assumptions about how perspectives emerge (if at all) from computation, a theory of time, an interpretation of quantum mechanics, and the persistence of identity.
Of course, we can start at the simplest possible interpretation, that we live in a "Matrix" style simulation, where we actually have real bodies in the "real world". This sidesteps the question of how to get sentient beings to emerge in a simulation and what that would entail. In this case, running out of RAM would have immediate consequences, since our sense of time in the simulated world would be in 1:1 correspondence with the "real world". We would experience all the possible glitches running out of RAM entails. Imagine taking an Apple Vision Pro and scaling it out. These are your conventional computer glitches. At the point of running out of RAM, you could immediately tell you were in a simulation.
Let's take the next level of interpretation, though. Let's assume we live in an "OpenAI Sora" type of simulation. In this simulation, the beings as well as the environment are generated on the fly, "randomly". At this point, I am just assuming that subjective perspectives can emerge just as they do in our world, where they are tied to beings that look very much like ourselves. In this case, the subjective time of the simulated beings is entirely uncorrelated with our own time. In a sense, we are just opening a "window" into another universe, like playing back a movie, but the beings themselves would exist whether or not we stumbled upon their particular sequence of bits. The problem with asking what the beings in this type of simulation would experience becomes obvious when you realize that multiple simulators can run the exact same simulation with exactly the same sequence of bits. The question then becomes: are the two simulations actually equivalent to each other? From the simulated beings' perspective, they could not tell which simulator is simulating them based on their experience, since each simulator can simulate exactly the same bit sequence.
Now this comes to the question of self-locating uncertainty, of being uncertain about which simulator is simulating your own existence. If there were only two simulators in the "real world" simulating your own existence, it would seem to be most reasonable to assign 50% probability that you are being simulated by either simulator. Then the question of what happens when the simulator runs out of RAM turns into the question of which simulator is running out of RAM? If only one simulator runs out of RAM, then from a naive estimate, you would only experience a 50% chance of some sort of "glitch" happening in your world. But of course, we have no way of knowing how many simulators are running this exact sequence of bits. It could very well be infinite. The question then becomes what is the probability distribution over all such simulators running out of RAM? This question seems impossible to answer from the simulated being's point of view.
I haven't even touched upon the question of continuity of identity, of what happens to your perspective when the simulation "crashes" or is paused. This really comes to the question of how conscious awareness supervenes on sequences of bits, or how our perspective gets tied to one sequence of events over another. In other words, this is similar in spirit to the question in the many worlds interpretation of quantum mechanics as to which branch your particular perspective gets tied to when the universe "splits" into different branches. In many worlds quantum mechanics, if there is one branch where the simulator runs out of RAM, there is still the possibility of other branches where your perspective continues unabated. You can see then that this question isn't really a question about simulations or quantum mechanics per se, but of how consciousness decides what perspective comes next.
I suspect the answer is already hidden in the data that we see already. You see, in quantum mechanics there is this notion of "no cloning" where the exact quantum state of a system cannot be cloned, or this would violate the uncertainty principle. I suspect that the solution to the problem of running out of RAM lies in the fact that our own conscious perspective cannot be cloned exactly. In other words, our own conscious experience as we experience it now, might be thought of in the following way. We cannot know what is generating our experience, so we naively assign a probability distribution over all such possible generators of our experience, including those of simulators of our own existence. Some of this probability mass includes situations where our own existence just fluctuates out of the vacuum, but this is vanishingly small. But then there is some other probability mass that is assigned to situations where our existence continues "normally". I suspect the conglomeration of all possible configurations that lead to the particular quantum state that specifies our particular perspective is actually the probability distribution as specified by quantum mechanics. That is, the origin of the probability distribution of quantum mechanics lies entirely in the fact that our own conscious experience can be generated by various possible simulators of various types that converge onto the fixed point probability distribution that is specified by the laws of quantum mechanics.
In this sense, then it is obvious why you cannot clone a quantum state, because a quantum state is a conglomeration of all possible "classical" sequences that have been simulated to such a sufficient degree to be called the same quantum state. In other words, you cannot clone a quantum state because a quantum state is the set of all possible clones that are indistinguishable from each other. Quantum mechanics is the end result of the fact that all possible clones have been carried out on every sequence of bitstrings.
Now the question arises: why does quantum mechanics use probability amplitudes rather than probability distributions, that is, complex numbers instead of ordinary real numbers? I suspect this has to do with the fact that quantum mechanics has a certain timeless quality to it, and it is this "time travel" quality that causes the probabilities to be complex-valued rather than real-valued. You see, if we just assigned classical probabilities to every event, we would have statistical mechanics instead of quantum mechanics. But statistical mechanics assumes that there is a single direction of time. I suspect that if you relax the notion of a single-valued time, you get quantum mechanics.
Thus, simulating a reality is akin to building a time machine.
Without knowing the nature of the simulation, we don't even know if there is an analogue for RAM or limited memory. Maybe you could walk in and out a door repeatedly and then glitch into a locked room. Maybe the whole thing would crash - our programs tend to do this when memory runs out. Maybe everything would just get paused or "adjusted down" to fit the restriction. The crash, pause or throttle wouldn't be apparent to us "on the inside" at all if it were happening.
@aCosmicWave we all just start moving more slowly.
Fortunately I can report that if anything, we're having RAM added, because everything keeps speeding up as I get older.
Things will stop making sense, people will start to glitch and make horrible decisions that will affect millions, and...
Wait
Have you not played Dwarf Fortress? The frame rate goes way down, a situation imperceptible to the dorfs. Then eventually the operator of the machine loses interest, or a pandemic makes the pop count drop, or a combo of those.
Edit: You should read some Greg Egan if you're into this question.
I imagine it shows itself when processes get dropped, whether it's walking into a room and forgetting what you were doing, losing your train of thought mid-sentence, or even passing out when you lie down to watch something.
I know exactly what would happen. It...uhh, what was I gonna say again? It just slipped out, it'll come back...
This is the entire premise of No Man's Sky.
Who is to say that the sim needs RAM? What if it were just a giant state machine where the current state depends only on the previous state, and the entire universe is the "RAM"?
We go to sleep and it clears
Great, thanks for the dose of existential dread.
We would probably see more caching of the parts of the universe that we don't typically observe. Since our current observations can't see those parts in real time, we wouldn't immediately notice.
The interesting bit would be to figure out what parts get cached, since we may not be the only sentient life.
Maybe the system would be configured with some odd laws that constantly shrink the size of the observable universe?
The server shuts down. The admin adds in a few more sticks of RAM and powers it on again.
The day is reset and we wake up again on the morning of the day when the RAM shortage happened.
I don't necessarily believe this, but I'll play along.
To make it appear natural so we don't notice, death is the first thing that comes to mind. So pandemics, disasters and wars that kill off beings on a large scale to free up memory. A globe with limited surface area seems ideal to stick us on to begin with, with anything outside of that sphere virtually impossible to access. The size of Earth could have been chosen because it fits comfortably within the RAM limits. If Earth is pushing the RAM limits, each planet could be hosted on its own server. So if we someday colonized Mars or the moon, the trip between would be like a server transfer making the RAM issues for interplanetary colonization inconsequential.
If you want to really explore the fringes of this concept, maybe those in the simulation would see glitches that shouldn't happen if it starts running out of RAM. UFOs, shadows, or synchronicities could become commonplace. People could randomly go catatonic or experience amnesia if they're personally impacted. If it got out of control across the entire simulation, perhaps a hard reset would become necessary. It may even be a planned cycle of hard resets based on the anticipated maximum lifespan of the simulation before things start to get fucky due to memory errors. So power on = big bang, and hard reset something like big crunch or heat death of the universe.
Wars would use too much RAM, so a pandemic would make more sense.
I'm more concerned with what happens when the hardware invariably fails...
The universe ends when little Timmy gets sent to bed for the night.
Human music. Huh. I like it!
you know, they're made out of meat?
Singing meat!
This is why the Hubble telescope had mirror problems and James Webb was so delayed. The admins had to sneak delays into the simulation while they upgraded the hardware to render more of the universe.
A semi-related but enlightening (thought) experiment.
There is a theory that our universe isn't actually 3D but is instead a projection/simulation on the 2D surface of a black hole (aka the big bang). If this were the case, the practical differences would be almost nonexistent. The exception is the Planck length, the smallest length that is meaningful. If our universe is 3D, we are extremely far from being able to measure effects anywhere close to the Planck length. If it is 2D, however, that length appears FAR bigger. It wouldn't be that far below what our current gravitational wave detectors can see.
The effects of this would be similar to a simulation running near its limit. It would be the equivalent of floating-point rounding errors.
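If "floating-point rounding errors" sounds abstract, the everyday version is a few lines of standard Python:

```python
# 0.1 and 0.2 have no exact binary representation, so their sum misses 0.3.
a = 0.1 + 0.2
print(a)             # 0.30000000000000004
print(a == 0.3)      # False
print(abs(a - 0.3))  # ~5.5e-17, the kind of last-digit error that piles up
```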
Ever walk into a room and forget why you went in there? That's garbage collection
Why do you think our admins wouldn't use autoscale, when they've obviously built it into the simulation?
You get stuff like https://en.wikipedia.org/wiki/Reality_Winner and the same movie/media coming out over and over again
Mandela effect too
We download more RAM.
Probably a mass removal of the poor - oh wait that's already happening.
Given the vastness of space and time, and the number of people who die having yet to learn anything (babies), I'd imagine we're a system with 32 GB of RAM only consuming a few hundred megabytes.
Besides, I’d imagine that any intelligence capable of constructing and running such a complex simulation would have the ability to scale their system as needed. Using our existing technology, they probably use hot swappable components so that if there is a hardware failure or the need to “download more ram” 🤣 then they can just remove and insert new components on the fly and we’d be none the wiser.
Of course, we being part of the simulation, I'd also wager that unless the creators of said simulation are truly evil and sadistic, we'll never know, because it's just not part of the programming. And if it weren't, we'd probably already have figured it out by now (beyond guessing and thought experiments). But rest assured, it is fun to think about, in a creepy and existential way.
If we are a simulation, what is the end goal of our creators? Could we be the roadmap for creating a new world in their real life? Maybe they are studying their own history and trying to figure out how their race came into being and evolved over time. Or maybe we are part of a crude video game keeping little Suzie occupied until dinner time. Better yet, maybe Susan is learning about simulations at university and we are part of her post-doctoral thesis.
Why even bother with hot swapping? Just shut down the simulation and turn it back on when you're done upgrading. No one in the simulation would be able to tell that anything happened.
Continuing with the thought experiment, if you shut it down completely, you'd lose valuable information that was stored in the other RAM modules. It's also reasonable to suggest that restoring the state of such a complex simulation would be more complex (maybe even impossible) and detrimental to the simulation.
Of course another thought just occurred to me: maybe we’re not a computer simulation, but an organic simulation (as in a Petri dish in a lab). Then there would be no reason for ram or hot swappable modules, or any machine parts whatsoever.
It would mean that space is as finite as the Petri dish, but since we’re so small we’d never know it because to us it would be so vast and impossible to reach the edges.
The number of people is 1. It's me. You guys are all NPCs
Starts with erectile dysfunction and ends with the little blue pill...
Adjacent answer, but resource requirements are lower than might be expected, since the simulation only needs to capture elements observed by a conscious entity. The vast majority of the known universe has not been observed in any detail that requires significant memory or processing resources. This same technique is employed by computer game designers so that only scenery and elements within view of a player are fully rendered.
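That game-design trick, in toy form (a plain distance cutoff standing in for a real view frustum or occlusion test; all names and numbers invented):

```python
import math

RENDER_DISTANCE = 100.0  # arbitrary cutoff for this toy example

def visible_chunks(observer_pos, chunks):
    """Yield only the chunks close enough to an observer to need full detail."""
    ox, oy = observer_pos
    for (cx, cy), label in chunks.items():
        if math.hypot(cx - ox, cy - oy) <= RENDER_DISTANCE:
            yield label  # fully simulate/render this one
        # everything else stays as cheap, coarse bookkeeping

chunks = {(0, 0): "home galaxy", (1_000_000, 0): "unobserved void"}
print(list(visible_chunks((0.0, 0.0), chunks)))  # ['home galaxy']
```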
Teleportation based on old location data being deleted
I am the only person who lives in the simulation. You all are computer generated.
There would either be some kind of mass extinction event or something that would be considered "supernatural" would occur to maintain the status quo
The server admins run a
kill -9
on a few processes. Inside the sim, this looks a lot like the Chicxulub impact. You should check out the short story "Sleepover" by Alastair Reynolds.
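And kill -9 really is that abrupt. A safe, self-contained Python demo on a POSIX system (it reaps a throwaway sleep process of its own, nothing important):

```python
import os, signal, subprocess, time

# Spawn a throwaway "inhabitant" (it just sleeps) so there's something
# safe to reap; picking real victims is left to the admins.
proc = subprocess.Popen(["sleep", "1000"])
time.sleep(0.1)

# kill -9 from Python: SIGKILL can't be caught or ignored, so from the
# process's point of view there is no warning at all.
os.kill(proc.pid, signal.SIGKILL)
proc.wait()
print(proc.returncode)  # -9: terminated by signal 9
```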
Who knows.. maybe we'll experience pointless wars and massive inequality.. selfish douchebags who only care about bolstering their ego might gain power.. heck, maybe even the climate will slowly start changing for the worse.
So... the ultra-rich are just poorly programmed processes with memory leaks. And there's no runaway-process killer to protect the system.
God is just a hack scripter; it makes sense.
The universe starts swapping
once we run out of ram then the universe starts running on Skyrim physics
We are the RAM
This is why older people think more slowly and lose memories or cognitive functions as side effects. They are deprioritized and moved from RAM to pagefiles/swap disk.
If you're unfamiliar, the OS will move process memory onto disk when RAM runs out.
Stutterers
How would you know what physics runs the host universe? For all we know, things like RAM limitations don't even apply there.
For a simulation as complex and powerful as the universe, we would be running on a real-time OS. So applications couldn't even run if the resources weren't sufficient.
Well, if we're in a simulation, then any assumptions we have about definitions and limitations may not apply. So we think storage needs RAM, but outside our restricted simulation it could be far different.
Like, I frequently ponder how something came from nothing. But I know I'm making assumptions when I ask that question. It may not be linear, it may not be either-or; there's something crucial I'm not seeing.