Why Our Universe is Almost Certainly Somebody's Simulation

Despite the title, this blog post is really here as a warning to others about trying to reason from first principles about the nature of our Universe...

We have already run sophisticated simulations of the formation of large-scale structure in our Universe. Yes, this is for a small fraction of the total Universe, and, yes, it only resolves structure down to roughly galaxy size. (That is, it doesn't simulate small enough scales to model the formation of stars or anything like that.) But the point remains that we're simulating the Universe.

We've also done "artificial life" simulations, in which we've shown that computer programs evolve. One of the most interesting physics colloquia I heard as a grad student at Caltech in the first half of the 1990's was about the Tierra project, that created an "environment" in which small computer programs competed for resources. It had a mechanism for random mutation, and over time the programs evolved to become more efficient.

Take these simulations, allow computer power to keep improving as it has in recent decades, and it really doesn't seem too much of a stretch to imagine that we could muster the computing power to simulate a local region of a galaxy in enough detail for very complex "organisms"-- simulated organisms, that is-- to evolve. Code in the basic laws of physics, give them enough space and computing time to evolve, and it could just happen.

Now, consider our Universe as we've observed it to be. We only know of one life-bearing planet, but we do know that there are lots of other planets in our Galaxy. And, looking out there, within the observable Universe (not even considering things so far away that light hasn't had time to reach us since the Big Bang), there are something like 100 billion galaxies like ours. Given how tenacious life on this planet has been since it got started, even if it's rare for life to get started (say, only one or a few instances in our own Galaxy), there are certainly other planets out there with life on them.

Using our statistical sample size of one: so far, a few hundred years out of a few billion years of evolution have included a technological civilization. That's a small fraction... but given the number of stars in our Galaxy, and the number of galaxies in the observable Universe, it means that there are other technological civilizations out there, somewhere. Let's assume that an appreciable fraction (i.e. anything more than an infinitesimal fraction) of these civilizations eventually become able to produce the kind of computation needed to run the simulations I'm describing above. For roughly the last 7 billion years, it has been reasonable to suppose that life could arise on planets like our own. During those 7 billion years, there have almost certainly been lots of these simulations run. (What does "lots" mean? Well, the numbers going into this are very uncertain, of course, but it's probably somewhere between hundreds and billions.)

In other words: for every one observable Universe like our own, in which intelligent creatures could arise, there are many simulations in which intelligent creatures could arise. Thus, any given civilization of intelligent creatures is far more likely to be inside one of the simulations than in the real Universe.

So, we're probably all part of somebody's simulation.

Right. Do I really believe that? No. I mean, maybe, but if so, so what? If the simulation were done well enough, we'd have no way to tell the difference. So, at some level, trying to decide whether we're somebody's simulation or part of a real Universe is mental masturbation. We see a Universe out there with physical laws that are universally obeyed. The process of science has a great track record of explaining how this Universe works and predicting what we will observe. Hence, it makes the most sense to go with the simplest explanation: that there is a real Universe, and that we are working on understanding it. If certain things about our Universe seem improbable from first principles-- for instance, why are the densities of Dark Matter and Dark Energy so close?-- it's worth thinking about whether that's a pointer to something deeper. But, at some level, the Universe is what it is, and it's worth trying to understand it without getting hung up on fundamental probabilistic arguments that lead us to thinking that none of it means anything anyway.

13 responses so far

  • This is great and just what I have thought for a long time. Glad to know someone much smarter than me thinks the same way.

  • rockandrollsteve says:

    So that would mean God is really just geeks in a lab.

    I like it.

  • dg says:

    Nice post, Rob. Worth mentioning, of course, that the Millennium Simulation was run exclusively with non-interacting dark matter (proxy) particles. The really interesting stuff (gas physics, stars, planets, etc.) is still mostly beyond our capacity to accurately simulate, not just due to the scale, but also because of the complications involved with modeling the interactions.

    Simulating something like the evolution of complex life in a region of a galaxy would require an increase in computing power that is pretty hard to comprehend. In practice, a long way off (if ever).

  • Charles Hixson says:

    I think you've drastically underestimated the number of simulations, but it *is* worth noting that all of the simulations are, and must be, drastic simplifications of the implementing universe. (I don't find plausible the arguments that a ring of simulations could be consistent.)

    Some people have claimed that one could handle the simplification purely by running the time more slowly within the simulation, but I don't believe that this would be practical either.

    There does exist one other alternative, of whose plausibility I'm uncertain. I.e., some virtual machines run more quickly by "translating" the code that they're running into native code. If a simulation could manage that, then the slowdown/simplification would be reduced to a constant. (Not a constant factor, but a constant amount per pre-compiled chunk.) I don't really believe that this could be possible. However ...

    The conclusion from this would necessarily be that if we are within a simulation, we are in a universe which has natural laws that are simplifications of the natural laws in the universe within which the simulation runs. I.e., that the natural laws that we see are not the real natural laws, but merely descriptions of the simulation as seen from within.

    This raises many questions, e.g.:
    How secure is their funding?
    What is their backup policy?
    How do they fix program crashes?
    What are their coding standards?
    etc.

    I don't find myself able to really believe this. Certainly the logic won't work, because what you observe (galaxies, etc.) is the simulation. So you don't know anything about the universe within which the simulation is run. (Is this more like Tierra or more like Eve-OnLine?)

    N.B.: If it's like Tierra, it depends on grant money. If it's like Eve-OnLine, it depends on attracting paying game players. In either case the simulation is likely to be turned off if it becomes boring.

  • rknop says:

    dg -- yeah, the Millennium simulation is only on the scale of galaxies at most, as I was indicating. (OK, it gets to smaller scales, but not by a whole lot.) However, I don't know that simulating the rise and evolution of organic life really is all that far off. I cited those two simulations as examples that we can do these *sorts* of things. We clearly haven't done simulations ourselves yet that include self-aware entities (unless the NSA really isn't talking). And there are philosophical questions regarding AI. (AI is sort of like effective nuclear fusion power; AI has been 10 years off for 40 years, and effective nuclear fusion power has been 25 years off for 40 years.) I suspect that if we ever create a "real" AI, it will evolve as such, not be designed as such. But I don't think that the computational power needed to do that kind of thing is unimaginable.

    Charles -- re: the number of simulations, I was just talking about the ones that might include self-aware entities. If you mean *any* simulation of any "world", yeah, I've vastly underestimated it, and we've already done huge numbers ourselves.

    As for slowdown/speedup, the rate at which the simulation runs and the rate at which time is perceived inside the simulation will almost certainly not be 1:1. That's already true of the simulations we run right now-- e.g. the Millennium simulation I link to covers several billion years of time, but clearly didn't take that long to run.

  • MadameThespian Underhill says:

    I really like that there are people like you folks in our world who can think of these things. The possibilities are seemingly endless (to my finite mind at least) and it's nice to be able to mull over some of them. Thanks!

  • Arnd says:

    Everybody who considers that we may live in a simulation proposes that it's a perfect simulation. That need not be the case. It seems unlikely to me... no technology is perfect. So it really is a scientific question that may be able to make predictions, depending on what kind of simulation you have in mind. If it runs on computers like our own (just much bigger and better), then there are limitations.

    1) The level of detail may very well not be infinite. It may be possible to determine what level of accuracy natural laws actually have. Finding those limits would be evidence for a simulated world.

    2) There may be an admin interface for people who want to visit the simulation. This would be cool to discover :o).

    3) In a simulation, anything can be possible. Natural laws can be broken intentionally by the simulation admins.

  • GrayGaffer says:

    The level of detail of the simulation only has to be sufficient for the awareness of the entities living in it, and only for whatever each one is currently paying attention to. We are already using this principle in hi-res flight sims by rendering in detail only the area where the eyes are looking and leaving the rest at lower resolution.
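    (A rough sketch of that gaze-dependent level-of-detail idea, purely illustrative-- the angular thresholds and function names here are made up, not taken from any real flight-sim engine:)

        import math

        def angle_between(gaze, target):
            # Angle in radians between two 3-D direction vectors.
            dot = sum(g * t for g, t in zip(gaze, target))
            norm = (math.sqrt(sum(g * g for g in gaze)) *
                    math.sqrt(sum(t * t for t in target)))
            return math.acos(max(-1.0, min(1.0, dot / norm)))

        def detail_level(gaze, target):
            # Foveal region gets full detail; the periphery gets a coarse stand-in.
            a = angle_between(gaze, target)
            if a < math.radians(5):
                return "high"
            elif a < math.radians(30):
                return "medium"
            return "low"

        print(detail_level((0, 0, 1), (0.02, 0.0, 1.0)))   # near the gaze: "high"
        print(detail_level((0, 0, 1), (1.0, 0.0, 0.2)))    # off to the side: "low"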

    We do not actually have atom-level detail for anything outside of our own solar system, and even within it apart from a few highly localized sample points our pixels cover meters at least.

    We do observe a universe that appears to obey a pretty simple set of rules (for some values of "simple", that is:). And note: it is only what we perceive that requires rendering, what is "out there" need not be so precise. Even if "out there" is an oscilloscope supposedly rendering pico-second events for me to observe.

    The time scale of the simulation need bear only a very loose relationship to the time scale of our perceptions. In fact, we go from quantized to apparently continuous perception at a frame rendering rate of around 50 per second. So what if it takes our supporting machine 10 seconds to compute each 20 msec slice of my experience? I would have no way of perceiving that. Worse than trying to look at my own retinas without a mirror.

    So, ultimately, it makes little practical difference. I am happy with my personal compute resources. Do I want the red or blue pill? A question I have not really answered in all the years since it was first asked. And the sysadmin's backup policies? funding? Well, I would like to keep doing this as long as I can, but if the system fails or the plug is pulled I would not be aware of it so again makes little difference.

  • rknop says:

    GrayGaffer said what I would have said re: resolution and such. The simulation can be very coarse as we look far away, since we can't see things that well-- but it can also adapt and improve as necessary, similar to how grid-based fluid dynamics simulations already work.
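    (To illustrate what I mean by "adapt and improve as necessary", here's a toy one-dimensional refinement loop-- not any real fluid code; the field and the threshold are arbitrary. Cells get split only where the field changes rapidly, and smooth regions stay coarse.)

        import math

        def field(x):
            # A field with one sharp feature near x = 0.5.
            return math.tanh(20 * (x - 0.5))

        def refine(cells, threshold=0.2, max_levels=5):
            for _ in range(max_levels):
                new_cells = []
                for (a, b) in cells:
                    # Split a cell only if the field changes a lot across it.
                    if abs(field(b) - field(a)) > threshold:
                        mid = 0.5 * (a + b)
                        new_cells += [(a, mid), (mid, b)]
                    else:
                        new_cells.append((a, b))
                if new_cells == cells:
                    break
                cells = new_cells
            return cells

        coarse = [(i / 8, (i + 1) / 8) for i in range(8)]
        fine = refine(coarse)
        print(len(coarse), "coarse cells ->", len(fine), "cells after refinement")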

  • Many years ago I posted on sci.physics a demonstration that many features of our Universe could be explained on the assumption that it is a simulation on some rather large computer. I can't remember all the details, but here are the highlights.

    Uncertainty Principle: The state of every particle is held in a fixed number of bits, so the more bits taken up by position information, the fewer bits are left for momentum. (A toy sketch of this appears after the list.)

    Finite speed of light: A kludge to prevent overflow for fast moving objects.

    Event horizon around black holes: A bug. We are getting underflow to denormalised numbers. The consequential rounding errors lead to Hawking radiation.

    Clairvoyance: Every now and then the simulation is backed up to tape. Occasionally the system fails and must be restarted from one of these tapes. Unfortunately the memory isn't properly cleared on the restart, so information from later in simulated time is available early. This explains why no one can use clairvoyance at will; it's not something you do, it's something that's done to you.

    I can't remember the rest.
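    (A toy version of that first item, just for fun-- all the numbers below are invented: pack a particle's state into a fixed number of bits, and the product of the position and momentum resolutions comes out constant, no matter how the bits are split.)

        # Toy "fixed bit budget" model of the uncertainty-principle item above.
        TOTAL_BITS = 32
        POSITION_RANGE = 1.0     # size of the simulated box (arbitrary units)
        MOMENTUM_RANGE = 1.0     # range of representable momenta

        for position_bits in (8, 16, 24):
            momentum_bits = TOTAL_BITS - position_bits
            dx = POSITION_RANGE / (2 ** position_bits)   # position resolution
            dp = MOMENTUM_RANGE / (2 ** momentum_bits)   # momentum resolution
            print(f"pos bits={position_bits:2d}  dx={dx:.2e}  dp={dp:.2e}  "
                  f"dx*dp={dx * dp:.2e}")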

  • andy.s says:

    So, if somebody throws an exception, the entire universe might collapse?

    Everybody! START! BEHAVING! LOGICALLY! RIGHT! NOW!

    Ooops, I just SEGFAULTed. sorry, everyb-

  • Jay Double says:

    That is the reason we cannot access dark matter. These hidden or "non interactive" files are not accessible by the end user(us). I wish I had administrator authority and cheat codes. Maybe the Kardashians have the cheat codes....................

  • [...] know completely). There is a way by which they can probably understand the concept and that is by creating a simulated universe of their own. They are using their intelligence to create interesting things. They’ll simulate a universe [...]