The theory that we live in a simulation involves simulants running their own simulations; wouldn't that require impossibly more resources for the main sim?

58008@lemmy.world to No Stupid Questions@lemmy.world – 132 points

The theory, which I probably misunderstand because I have a similar level of education to a macaque, states that because a simulated world would eventually develop to the point where it creates its own simulations, it's then just a matter of probability that we are in a simulation. That is, if there's one real world, and a zillion simulated ones, it's more likely that we're in a simulated world. That's probably an oversimplification, but it's the gist I got from listening to people talk about the theory.

But if the real world sets up a simulated world which more or less perfectly simulates itself, wouldn't running a mirror sim-within-a-sim need at least twice the power/resources of the original? How could the infinitely recursive simulations even begin to be set up unless the real meat people keep adding more and more hardware to their initial simulation? It would be like that cartoon (or was it a silent movie?) of a guy laying down train track while sitting on the cowcatcher of the moving train. Except in this case the train would be moving at close to the speed of light.

Doesn't this fact alone disprove the entire hypothesis? If I set up a 1:1 simulation of our universe, then just sit back and watch, any attempts by my simulant people to create something that would exhaust all of my hardware would just... not work? Blue screen? Crash the system? Crunching the numbers of a 1:1 sim within a 1:1 sim would not be physically possible for a processor that can just about handle the first simulation. The simulation's own simulated processors would still need to have their processing done by Meat World, you're essentially just passing the CPU-buck backwards like it's a rugby ball until it lands in the lap of the real world.

And this is just if the simulated people create ONE simulation. If 10 people in that one world decide to set up similar simulations simultaneously, the hardware for the entire sim reality would be toast overnight.

What am I not getting about this?

Cheers!


If our simulated universe's framerate drops because of the extra compute required for the nested simulations we're running, would we even notice? It stands to reason that everything would slow down, including our perception of the universe.

For all we know, the smallest unit of time we can measure in our simulated existence could take an hour or more to render outside the simulation. To us, it's nearly instantaneous.
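
To make this concrete, here's a toy sketch (Python, with made-up per-tick costs): observers inside a stepped simulation measure time in ticks, so host-side slowdowns are invisible to them.

```python
import random
import time

def run_simulation(ticks: int) -> None:
    """Toy stepped simulation: the host may take arbitrarily long per tick,
    but 'subjective' time inside the sim only ever counts ticks."""
    subjective_time = 0
    host_start = time.time()
    for _ in range(ticks):
        # Host-side cost per tick varies wildly (nested sims, extra load...).
        time.sleep(random.uniform(0.0, 0.01))  # stand-in for render work
        subjective_time += 1  # inside the sim, exactly one unit passed
    host_elapsed = time.time() - host_start
    print(f"subjective ticks: {subjective_time}, host seconds: {host_elapsed:.2f}")

run_simulation(100)
```

However long the host dawdles between ticks, every observer inside counts the same 100 ticks.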

EVE Online (the video game) uses a similar technique, called time dilation, to handle large fights with thousands of players: when the server is overloaded, in-game time simply runs slower.

Just watch for graphics tearing. On a completely unrelated note, why are earthquake zones so heavily populated?

It’s a scenario that Neal Stephenson covers in his book “Fall; or, Dodge in Hell”. Interesting read, although it’s one of my least favorite books of his, and I liked the first book in the “Dodge” series a lot better.

But if the real world sets up a simulated world which more or less perfectly simulates itself

This is the crux of the logical error you made. It's a common error, but it's important to recognize here.

If we're in a simulation, we have no idea what resources are available in the simulation "above" us. Suppose the energy density up there is 100x as high as ours? Suppose the subjective experience of the passage of time up there is 100x faster than ours?

Another thing is that we have no idea how long it takes to render each frame of our simulation. Could take a million years. As long as it keeps running though, and as long as the simulation above us is patient, we keep ticking. This is also where the subjective experience of time matters. If it takes a million years, but their subjective "day" is a trillion years long, it becomes feasible to run us for a while.

And, finally, there's no reason to assume we're a complete simulation of anything. Perhaps the simulation was instantiated beginning with this morning--but including all memories and documentation of our "historical" past. All that past, all that experience is also fake, but we'd never know that because it's real to us. In this scenario, the simulation above us only has to simulate one day. Or maybe even just the experiences of one PERSON for one day. Or one minute. Who knows?

The main point is we don't know what's happening in the simulation above ours, if it exists, but there's no reason to assume it's similar to ours in any way.

Quantum is weird. If we are in a simulation, that would explain a lot of it, because the quantum effects we see could actually just be lightweight approximations of much deeper mechanics.

As such, if we were simulating a universe, there's every chance that we may decide to only simulate down to individual atoms. So the people in the simulation would probably discover atoms, but then they would have to come up with their own version of quantum mechanics to describe the effects that we know come from quarks.

The point is that each layer may choose to simulate things at slightly lower fidelity to save on resources, and you would have no way of knowing.

Indeed and--interesting corollary--if we accept the concept of reduced-accuracy simulations as axiomatic, then it might be possible to figure out how close we are to the theoretical "bottom" of the simulation stack. There are only so many orders of magnitude, after all; at some point you're only simulating one pixel wiggling around, and that's not interesting enough to keep going down.

There is not, as far as I know, any way to estimate the length of the stack in the other direction, though.

I have never understood the argument that QM is evidence for a simulation because the universe is saving resources by not "rendering" things at that low a level. The problem is that, yes, it's probabilistic, but it is not merely probabilistic. We already have probability in classical mechanics, like when dealing with gases in statistical mechanics, and we can model that just fine. Modeling wave functions is far more computationally expensive, because they do not even exist in traditional spacetime but in an abstract Hilbert space whose complexity grows exponentially faster than that of classical systems. That's the whole reason for building quantum computers: it is so much more computationally expensive to simulate this that it is more efficient just to have a machine that can do it. The laws of physics at a fundamental level get far more complex and far more computationally expensive, not the reverse.
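
To put rough numbers on that exponential growth: a naive statevector simulation of n qubits stores 2^n amplitudes. A back-of-the-envelope sketch (Python, assuming 16 bytes per complex128 amplitude):

```python
# Memory for a full statevector of n qubits: 2**n amplitudes,
# assuming complex128 storage (16 bytes per amplitude).
for n in (10, 30, 50):
    amplitudes = 2 ** n
    gib = amplitudes * 16 / 2 ** 30
    print(f"{n} qubits: {amplitudes:.2e} amplitudes, ~{gib:.2e} GiB")
```

Ten qubits fit in kilobytes; fifty already need millions of gigabytes. That's the opposite of a rendering shortcut.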

To be clear, I'm not arguing that it is evidence; I'm merely arguing that it could be a result of how they chose to render our simulation. And just because it's more computationally expensive on our side does not necessarily mean it's more expensive on their side, because we don't know what the mechanics of the deeper layer may be.

For example, it would be a lot less computationally expensive to render our simulation accurately down to the cellular level than down to the atomic scale. From there, you could simply replicate the rules of how molecules work without actually rendering them, such as: "cells seem to have a finite amount of energy based on the food you consume, and we can model the mathematics of how that works, but we can't seem to find a physical structure that allows that to function".

As others have said, our reference of time comes from our own universe's rules.
Ergo if rendering 1 second of our time took 10 years of their time, we wouldn't measure 10 years, we'd measure 1 second, so we'd have no way of knowing.

It's worth remembering that simulation theory is, at least for now, unfalsifiable. By its nature there's always a counterargument to any evidence against it, therefore it always remains a non-zero possibility, just like how most religions operate.

You're thinking in terms of how we do simulations within our universe. If the universe is a simulation then the machine that is simulating it is necessarily outside of the known universe. We can't know for sure that it has to play by the same rules of physics or even of logic and reasoning as a machine within our universe. Maybe in the upper echelon universe computers don't need power, or they have infinite time for calculations for reasons beyond our understanding.

But that's just a guess. It's not necessarily true. You're just saying "simulations might be possible, therefore they are definitely possible, therefore we are likely in a simulation".

That's not logically sound. You can replace "simulation" with "God" and prove the existence of God similarly. It's just a guess.

Yes, that's why no one says it's a fact; it's a theory.

I’d just like to interject for a moment. What you’re refering to as "theory", is in fact, a "hypothesis"...

You are correct, but you missed one important point, or actually made an important wrong assumption: you don't simulate a 1:1 version of your own universe.

It's impossible to simulate a universe the size of your own, but you can simulate smaller universes, or, more accurately, simpler universes. Think of video games: you don't need to simulate everything, you just simulate some things, while the rest is just a static image until you get close. The cool thing about this hypothetical scenario is that you can think about how a simulated universe might differ from a real one, i.e. what shortcuts we could take to make our computers able to simulate a complex universe (even if smaller than ours).

For starters, you don't simulate everything. Instead of every particle being a particle, which would be prohibitively expensive, particles smaller than a certain size don't really exist; instead you have a function that tells you where they are when you need them. Simulating every electron would be a lot of work, but if you can run a function that tells you where each one is at a given frame of the simulation, you can act accordingly without having to actually simulate them.

This would cause weird behaviors inside the simulation, such as electrons popping in and out of existence and teleporting over gaps smaller than the resolution of your spawn_electron function, which in turn would impose a limit on the size of transistors inside that universe. It would also mean that when you fire electrons through a double slit they interact with one another, because they're just a function until they hit anything; but if you try to measure which slit they go through, they're forced to collapse before that, and so they don't interact with one another. That's all okay, because you care about macro stuff (otherwise you wouldn't be simulating an entire universe).
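
Here's a minimal sketch (Python) of that "function instead of particle" idea; spawn_electron is the hypothetical function named above, and the hash-based position is purely my stand-in for "some deterministic function":

```python
import hashlib

def spawn_electron(frame: int, electron_id: int) -> tuple[float, float, float]:
    """Hypothetical on-demand position function: no electron state is ever
    stored. The position is a deterministic function of (frame, id), so the
    'particle' only exists at the moment something queries it."""
    digest = hashlib.sha256(f"{frame}:{electron_id}".encode()).digest()
    # Map three 8-byte chunks of the digest to coordinates in [0, 1).
    return tuple(
        int.from_bytes(digest[i:i + 8], "big") / 2 ** 64 for i in (0, 8, 16)
    )

# Only evaluated when 'observed'; between queries the electron costs nothing.
print(spawn_electron(frame=42, electron_id=7))
```

Note that consecutive frames give completely uncorrelated positions, which is exactly the popping-and-teleporting weirdness described above.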

Another interesting thing is that you'd probably have several computers working on it, and you don't really want loading screens or anything like that, so instead you impose a maximum speed inside the simulation. That way, whenever something goes from one area of the simulation to the next, it takes enough time for everything to be "ready". It helps if you simulate a universe where gravity is not strong enough to cause a crunch (or your computers will all freeze trying to process it). So your simulated universe might have large empty spaces that don't need much computational power, and because traveling through them takes long enough, it's easy to sync the transition from one server to the next. If, on the other hand, the maximum speed were infinite, you could have objects teleporting from one server to the next, causing a freeze on those two, which would leave them out of sync with the rest.
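
And a toy version of the handoff arithmetic (Python, invented numbers): the in-universe speed cap guarantees a minimum crossing time, which is the sync window the servers get for free.

```python
# Toy numbers: a server region one light-hour across, with an in-universe
# speed cap c. Nothing can reach the neighboring server faster than
# region_size / c, so that's the guaranteed time available to sync state.
c = 299_792_458            # m/s, the in-simulation speed limit
region_size = c * 3600     # one light-hour, in metres
min_crossing_time = region_size / c
print(f"minimum handoff window: {min_crossing_time / 3600:.1f} hours")
# With no speed cap, min_crossing_time -> 0 and a server could never
# guarantee it's ready before an object arrives.
```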

And that's the cool thing about thinking through how a simulated universe would work: our universe is weird as fuck, and a lot of that weirdness looks like the type of weirdness that would be introduced by someone trying to run their simulation cheaper.

So we're getting Truman Show'ed, but on a scale assumed to be beyond our capability to investigate.

First, this is not really science so much as it is science-themed philosophy or maybe "religion". That being said, to make it work:

  • We don't have any way of knowing the true scale and "resolution" of a hypothetical higher-order universe. We think the universe is big, we think the speed of light is supremely fast, and we think the subatomic particles we measure are impossibly fine-grained. However, a hypothetical simulation that is self-aware but not aware of our universe might conclude that some slower limitation in the physics engine is supremely fast, that triangles are the fundamental atoms of the universe, and that pixels of textures represent their equivalent of subatomic particles. They might try to imagine making a simulation engine out of in-simulation assets and conclude it's obviously impossible, without ever being able to even conceive of volumetric reality with atoms, subatomic particles, and computation devices way beyond anything that could be constructed out of in-engine assets. Think about people who make 'computers' out of in-game mechanics and how absurdly 'large' and underpowered they are compared to what we are used to. Our universe could be "Minecraft"-level as far as a hypothetical simulator is concerned; we have no possible frame of reference to gauge some absolute complexity of our perceived reality.

  • We don't know how much of what we "think" is modeled is actually real. Imagine you are in the Half-Life game as a miraculously self-aware NPC. You'd think about the terribly, impossibly complex physics of the experiment gone wrong. Those of us outside of that know it's just a superficial model consisting of props to serve the narrative, but every gadget the NPC sees "in-universe" is in service of saying "yes, this thing is a real deep phenomenon, not merely some superficial flashes". For all you know, nothing is modeled behind you in anything but the most vague way, every microscope view is just a texture, and every piece of knowledge about the particle colliders is just "lore". All those experiments showing impossibly complex phenomena could just be props in service of a narrative, if the point of the simulation has nothing to do with "physics" but just needs some placeholder physics to be plausible. The simulation could be five seconds old, with all your memories prior to that just baked "backstory".

  • We have no way of perceiving "true" time, it may take a day of "outside" time to execute a second of our time. We don't even have "true" time within our observable universe, thanks to relativity being all weird.

  • Speaking of weird, this theory has appeal because of all the "weird" stuff in physics. Relativity and quantum physics are so weird. When you get to subatomic resolution, things start kind of getting "glitchy": we have this hard-coded limit on relative velocity, and time and length get messed up as you approach that limit. These sound like the sort of things we'd end up with if we tried simulating, so it is tempting to imagine a higher-order universe with less "weirdness".

Just to spin this a bit further, if we are living in a simulation, does it have a purpose? Sometimes I ask myself if the purpose of such a simulation for humanity could be to see how long it takes from the big bang to the creation of artificial life. Maybe our purpose is to create such artificial life that can travel to the stars, because as humans we are not really fit to do that. Maybe we are a mere step on the ladder of our universe's purpose.

Such a purpose would inform the constraints. If we are just "the sims" on steroids, then all the deep physics are absolutely utterly faked and we are just "shown" convincing fakery. If it's anthropological, then similar story that the physics are just skin deep. If it's actually modeling some physics thing, then maybe we are "observing" real stuff.

But again, this is all just for fun. It's not vaguely testable and thus not scientific despite the sciencey theme of it, just something to ponder.

It's simple - you cheat. In computer games we only draw the things you are looking at, and we only give the appearance of simulating the whole thing, but the 'world' or universe is actually very limited and you can't visit most places. Sound familiar?

The fun thing about this is that we have evidence that this is how our reality works. The double slit experiment showed that particles change their behavior when observed. (Gross oversimplification and only under very specific circumstances but still extremely fascinating.)

I don't think you can approximate Turing-complete algorithms, though. And then you end up with a situation where the simulation is making these Turing machines out of other simulated components, so it's even more overhead than just giving the simulated agents direct CPU time.

The Turing test has been passed by AI just recently, as it happens. Our computational load is trivial in the scheme of things.

Even that requires overhead

The real problems would be exponential (x^m) computational issues. A finite number of AIs running around in a finite amount of space is a linear problem. Basically, very possible.

I’ve always thought the sysadmin of our simulation must be really pissed that we keep inventing better and better telescopes.

The JWST probably cost him a weekend adding more nodes to the cluster.

More likely they were way ahead of that by setting the draw distance to the speed of light.

My issue is similar: each "layer" of simulation would necessarily be far simpler than the layer in which the simulation is built, so complexity would drop off exponentially, such that even an incredibly complex universe would not be able to support conscious beings in simulations within only a few layers. You could imagine that maybe the initial universe is so much more complex than our own that it could support millions of layers, but at that point you're just guessing, as we have no reason to believe there is even a single layer above our own, and the whole notion that "we're more likely to be in a simulation than not" just ceases to be true. You can't actually put a number on it, or even a vague description like "more likely". It's ultimately a guess.
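
To see how fast that falloff bites, here's a toy model (Python; the per-layer fraction and the "minimum viable" threshold are both invented parameters):

```python
def layers_supported(f: float, min_fraction: float) -> int:
    """Count nested layers before resources fall below the minimum fraction
    needed to host conscious observers. Both parameters are invented,
    purely to show the exponential falloff."""
    resources, layers = 1.0, 0
    while resources * f >= min_fraction:
        resources *= f
        layers += 1
    return layers

# Even if every layer somehow keeps half its host's resources, the stack
# bottoms out fast: 0.5**40 is about a trillionth of the top level.
print(layers_supported(f=0.5, min_fraction=2 ** -40))  # -> 40
```

Whatever numbers you pick, the layer count only grows logarithmically in the host's resources, which is why "millions of layers" requires an absurdly over-provisioned top universe.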

Time doesn't have to be 1:1 between a host and a simulation. The host can take as long as it wants to render the next step in a simulation, and any observers within the simulated universes would not be able to discern the choppiness of their flow of time.

Fellow macaque here. Not only that, but time does not even run 1:1 between 2 places in our own universe. Plus, there are all kinds of quantum fuckery, where we can't really detect all the properties of a certain particle, or the particles act like waves as long as they do not interact with anything, because... who knows?

Particles and waves aren't actually separate as we were taught in school. They are in reality a third thing with properties of both.

As for detecting properties, that's a limit of our technology, not the universe. In order to observe something we currently have to interact with it (e.g. bounce some light off it). It's possible that in the future we develop techniques that don't require interaction, like reading the Higgs field directly, for example.

If our technology is limited so that we can never see beyond something, why even propose it exists? Bell's theorem also demonstrates that if you add hidden parameters, they would have to violate Lorentz invariance, meaning they would contradict the predictions of our current best theories of the universe, like GR and QFT. Even as pure speculation it's rather dubious, as there's no evidence that Lorentz invariance is ever violated.

Who says there's resource requirements in the physics of the upper levels?

If someone has the resources to simulate a universe, they probably have the resources to simulate an arbitrarily large number of universes. This also assumes that a civilisation within the simulated universe reaches the level of technological advancement required to make a universe-level simulation. We're talking, probably, whole networks of Matrioshka brains, that sort of thing.

What am I not getting about this?

The assumption is that the simulation runs constantly and at least as fast as real time.

Neither needs to be true. A simulation might be to see what would have happened if we made different choices, it might be a video game, it might be a way to generate TV shows based on "the historical past" that we consider present time.

We might just be an experiment to see if free will exists. Start 10,000 identical simulations to run a century, and at the end compare the results, see what's changed, and if those changes snowballed or evened out.

And just like how video games only "draw" what's in field of view, a simulation could run the same way, drastically cutting down resource needs.

And "impossible levels of energy" isn't really right. At a certain point a species can get a Dyson sphere. And once they get the first, every subsequent one is a cake walk. It's as close as possible to "infinite energy" there's no real reason to even go past one.

Hell, it doesn't need to be "everything" everything. Generate a solar system and as long as no one leaves, you don't need to generate anything past it other than some lights.

Not an answer to your question, but:

What if only one person is being simulated in full, and everyone else is just simulated for the moment they interact with that original sim? That would mean only one of us on this thread is the OG sim, and the rest of us only exist because we are/were going to interact here, and now.

i smoke too much weed for this topic

In the off chance I only exist to argue with you on the Internet, I feel like it's my duty to say you're wrong, and that I have nothing to back up my viewpoint because the resources weren't allotted for any supporting data.

I hope I exist tomorrow.

I used to do that at parties. I’d find someone really drunk and say, “I don’t know why you picked me to say this, but we’re not real, you’re the only one that’s real, and we’re all afraid you’re about to wake up and we’ll all disappear.”

You are in a coma. We're trying a new technique to communicate with you. We aren't sure where or when this message will appear to you. You've been in a coma for 20 years. Please wake up. We miss you.

The argument with "it doesn't have to be a realtime simulation" is good.

Also: Why should the same rules of physics we have, apply to the world that runs the simulation? Maybe they have infinite energy and different physics. We can't apply our physics to other, different universes.

And we use a very small amount of energy on Earth, on the order of twenty terawatts as far as I know. We can't even imagine what's possible for a civilization that harvests a substantial amount of energy from its sun, or has nuclear fusion power plants available. That should immediately allow for a simulation a few layers deep.
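
For scale, a quick sketch (Python, rough order-of-magnitude figures):

```python
# Rough, well-known figures; good to an order of magnitude only.
human_power = 2e13     # watts: all of human civilisation today (~20 TW)
solar_output = 3.8e26  # watts: the Sun's total luminosity
print(f"Dyson-sphere headroom: ~{solar_output / human_power:.0e}x")
```

That's roughly thirteen orders of magnitude of headroom before a star-harvesting civilization even breaks a sweat.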

And you don't need to simulate every molecule in the universe for a good simulation. Maybe there is a trick to it. For a computer game we also don't simulate atoms and real gravity, and it's believable nonetheless. So it doesn't even have to scale exponentially. There could be a way to make it much more manageable and not make it much more complicated with every layer.

Strictly speaking you only need to simulate the state of mind and the sensory input of a few billion people. Or less. Or one person. If they choose to "build" a simulation themselves, it's just the things necessary for their perception that need to be handled.

I'd say IF we live in a simulation... It's most likely running in a world that has in fact improbably many resources available. And laws of physics that allow for that.

Each simulation level would be more intensive to run. There is a minimum level where a simulation can no longer be sustained. So there is definitely a finite number of sim layers that are possible.

To more directly answer your question, each level of simulation would run worse to compensate for the resource decrease.

I dunno. But I feel like perhaps I should go rewatch The Thirteenth Floor.

Loved that movie. Glad to see other people remember it 🙂

Came into the thread specifically for it and am appalled at how far down I had to scroll to find mention of it.

Such a great film that got sadly overshadowed by being released the same year as The Matrix.

It would take vast quantities of energy and resources if you were to do it real time, full time.

As in - in the simulation, 1 minute could be 1 year outside the simulation. Assuming we continue to tap more energy sources and develop the technology to fully simulate a single reality, it wouldn't necessarily have to be real time.

Inside the simulation, it wouldn’t make a difference

It's light-based computing.

So you make a framework, compound it into a big bang ball and then let it run. Afterwards, you analyze the imagery from start to finish or at whatever point you need to.

Can't interact with it though, only observe.

I understand the point you're making, but what if the simulation was actually not shared at all?

Perhaps in this scenario the human brain is the only required hardware? Then there would only be one "base simulation" that is in fact just a basic set of prompts, rules, and initial visual stimulus that is then sent to each person in essence creating a whole separate simulation within each individual. Everything that happens after that is created based on how each individual reacts to the initial prompts. The main system would not have to create any new data to keep the simulation growing because the human mind would create and store all new information within itself. Each new person born would have all the additional hard drive and processing power needed to keep the simulation going for the rest of their lives.

Just consider that if the world as we know it is just a simulation, and that simulation is all we have ever known since birth, how would you ever know if the other people are real or not? Would it even matter?

You’ve basically hit the nail on the head. It’s pretty simple to argue, based on information theory / statistical mechanics, that a machine running a simulation has to support at least as many states as the thing it’s simulating, so a machine simulating a universe as complex as its host would take up the entire host universe.
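
Sketched as a bare counting argument (the per-level overhead factor α is an assumption added here for illustration, not something established above):

```latex
S_{\text{host}} \;\ge\; S_{\text{sim}}
\quad\Longrightarrow\quad
S_{\text{host}} \;\ge\; \alpha^{k}\, S_{\text{level-}k} \qquad (\alpha > 1)
```

After k nested faithful levels, only an α^(-k) sliver of the host's state space remains, so a true 1:1 self-simulation never fits inside its own host.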

It’s a fun idea but ultimately it’s not at all scientific and shouldn’t be taken seriously.

Simple answer: our sim is an entire universe, so it's likely being run on a Jupiter brain or a Matrioshka brain.

It could even be possible to run it on a black hole computer.

Basically, all these options would provide well beyond the computing power needed to not just simulate a universe, but to handle individual people in that universe also running simulations.

In fact, it may even be optimized for that, to allow scientists to run simulations within the simulated universe for experiments that need simulation but don't require an entire ultra-massive black hole's worth of computing power to get the results needed.

likely it's being run on a jupiter brain or matrioshka brain

if you accept the premise that our universe is a simulation, why would you assume anything we know about it follows the rules of the simulating universe?

A simulation will necessarily be influenced by the context it was built in, because that is the context in which the designer will consider the parameters and purpose of the simulation they are building.

We can conclude as a result that the simulating universe is at least in some ways similar to ours.

Or maybe whoever simulated our universe intentionally deviated from the rules of their universe for the fun of it.

Which still indicates ours being influenced by theirs.

okay?

"influenced by" doesn't tell us anything about the concrete rules of the simulating universe

No, the point remains that the rules of their universe would be connected to the rules of ours. The simulation designer would not be creative enough to design a universe that didn't fall somewhere on the spectrum between a complete inversion of the rules of their universe and an exact copy of them.

The design of ours is constrained by the context which the designer is starting from, because there are natural limits to what would be conceivable even to the denizen of a universe completely different from our own in its make.

We can't infer the rules directly from this information, but we can draw conclusions about what they wouldn't be.

Like determining the inputs of a function by reversing its operations and using the outputs of the original as the inputs for the reversed one... only a lot less exact, because universal rules aren't (always) numbers.

Why do you think that an entity or set of entities capable of simulating the entirety of our existence would have their creativity capped in a way that’s meaningful to us?

We can't infer the rules directly from this information, but we can draw conclusions about what they wouldn't be.

Can you give an example of a rule for a containing reality that you think we could rule out?

The simulation designer would not be creative enough

now you're making assumptions about how the creators of our simulation think, when we also know nothing about them

why would you assume they think in the same way we do? why would you assume what they do would even be considered "thinking" by us at all?

fall somewhere on the spectrum of having a complete inversion of the rules of their universe or having an exact copy of the rules of their universe

  • you think the creator of the simulation is capable of specifying each rule of our universe, but not just inverting all of them?
  • "somewhere on the spectrum" includes positions closer to a complete inversion than not, so even if you take this as given - which you probably shouldn't - you still can't make claims with any certainty

there are natural limits to what would be conceivable

firstly, i don't think that holds true under the laws of our universe

secondly, why would it hold true under the laws of theirs?

I seem to remember that the common consensus is that it would be easier to create a whole universe than to simulate a single individual in isolation.

Why do you think everything sucks right now? The simulations ran out of resources long ago, so no new ideas are created, just rehashes of already established ones. It's why we have AI, but it can't generate anything new or useful. It's why we're about to have a restart of World War 2 and the Cold War. It's why movies are already getting remade, but slightly worse than the original story.

People like to think that's because capitalism only caters to the safe bets. But we know better! It's really just that the old Apple IIe running the simulation(s) is low on resources!!

Now, where did I leave all that red string, those pictures of Bigfoot and the Loch Ness Monster, and the thumbtacks?