Are You Living in a Computer Simulation?

Recently we had a thread going over what was called “The Carter Hypothesis”, also known as “The Doomsday Argument”.

Nick Bostrom of Oxford University has developed an argument with striking similarities, but with a different point and, he claims, a more reliable method. Could we all be living in a computer simulation?

Main website. Specific paper. Less rigorous development.

I ran across this paper in a LiveJournal philosophy forum, but I think it is well-suited for GD as well given that we have at least one functionalist on board (though he mightn’t like labels) and several people who adopt a computational model of consciousness to the extent that they posit what is commonly known as weak AI. I think there are even a few strong AI proponents here.

In any event, the main thrust of the argument runs as such:

Given three possibilities:
1: The human species is likely to be extinct before reaching the “posthuman” stage (that of staggering technological advance);
2: “Posthuman” civilization is unlikely to run simulations of previous existence (evolutionary history);
3: we are almost certainly living in a simulation

only (3) has any probability.

The argument proceeds by accepting the condition of “substrate independence”, which states that there is no a priori or necessary reason why consciousness can only occur in carbon-based, biological brains. It then suggests that sufficiently complex routines run on a computer would enable subjective experience (basically the core component of consciousness). Finally, it suggests that such routines are within the scope of permissible computation, and that our future kin will have the technology to run them. Given our current propensity for simulation, it is inductively plausible that we will continue to simulate, so the probability of (2) above goes to zero. To dispose of (1), he uses “possible world” semantics to consider the set of all possible worlds, arguing that the worlds which do run such simulations will contain vastly more conscious entities than those that don’t, or those whose technological development is cut short by extinction. So (1) is tossed aside by considering the set of all worlds with consciousness and noting that it is far more likely to be conscious in a technological world than otherwise. The final step, then, points out that if this is the case, there are more simulated consciousnesses than “real” ones, so (3) is the most likely scenario.
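The quantitative core of that final step can be sketched out. Bostrom's paper expresses the fraction of all human-type observers who are simulated as, roughly, f_sim = f_P·N̄ / (f_P·N̄ + 1), where f_P is the fraction of civilizations that reach the posthuman stage and N̄ is the average number of ancestor-simulations such a civilization runs. A minimal sketch (the sample values below are my own illustration, not figures from the paper):

```python
def simulated_fraction(f_p: float, n_bar: float) -> float:
    """Fraction of observers who are simulated, per Bostrom's fraction.

    f_p:   fraction of civilizations that reach the posthuman stage
    n_bar: average number of ancestor-simulations such a civilization runs
    """
    return (f_p * n_bar) / (f_p * n_bar + 1)

# Even a tiny posthuman fraction is swamped by a large number of runs:
print(simulated_fraction(0.001, 1_000_000))  # ~0.999: almost everyone is simulated
print(simulated_fraction(0.001, 0))          # 0.0: nobody runs simulations, i.e. (2)
```

The point of the formula is that the denominator only ever adds 1 (the one “real” population), so any appreciable number of simulations pushes the fraction toward 1.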

In order to finish the assertion, however, he crafts what he calls a “bland indifference” principle. This principle distinguishes two cases and argues that the distinction is essentially irrelevant to the issue at hand. Either simulated consciousness is the same as “real” consciousness, in which case the argument holds, or the two are similar enough (though still distinct) that the people in question have no information about which case they are in.

In the paper, he even addresses the aforementioned Doomsday argument:


That said, where does erl stand on the matter, being the OP and all? Well I’ll tell you.

First, I don’t think his dodge of the Doomsday argument is sufficient. In fact, all either argument requires is that (A) we know we’re alive and (B) we don’t know the future. Rephrasing the Doomsday argument in possible-world semantics wouldn’t work, however, since the extreme increase in the number of those alive would possibly outweigh all the cases where there is human extinction. In any event, the reasoning doesn’t quite escape a striking similarity, in that its conclusion is driven by incomplete knowledge and a choice function over apparently random sets of possible events (see the other thread for the arguments about it; let’s try to stick to the paper in question in this one).

Secondly, and perhaps more importantly, I’m mostly of the mind that it rapidly approaches nonsense to say “I am dreaming” or “we’re all in a computer simulation” (which, I might add, are somewhat common arguments against such skeptical hypotheses). I don’t have the feeling his “bland indifference” is that bland at all, and mostly I think this stems from confusing epistemic skepticism about whether we can “really know” how we exist with an equivalence (or should I say equivocation?) that states that this is how we exist. So no, the argument in that sense isn’t defeatable. Neither is solipsism. Whoopie.

What do you think?

Having gone over those links, I have to say I think you’re misrepresenting his argument slightly. It seems he is trying to prove only that one of the three hypotheses must be true; in none of the three links does he suggest that only the third is plausible.

That said, I agree with your first objection. He’s making the same assumption as in the Carter hypothesis, just with more words and fewer equations. His “bland indifference principle” rests on the assumption that the observer is chosen randomly from the set of all human minds, real or simulated. There is no evidence for such an assumption, and the only justification he provides for that assumption is that there’s no evidence to the contrary either. I prefer not to reason from a vacuum, thank you.

I also have problems with his mathematical arguments prior to that point. In particular, he states that “Because of the immense computing power of posthuman civilizations, N[sub]i[/sub] is extremely large” where N[sub]i[/sub] is the average number of human simulations run by civilizations which have the capacity and interest to do so. But the implication in that statement doesn’t follow, to my eyes. What if a given posthuman civilization can and does run human simulations, but they all run the same one? If a million people run the same standard version of Microsoft Human[sup]TM[/sup], is N[sub]i[/sub] equal to a million or is it equal to one?

(More generally, just because some post-human civilization can and wants to run human simulations doesn’t mean that they will. Such an implication would be true for us, but I don’t see why it would necessarily hold for an arbitrary post-human civilization without making unsubstantiated assumptions about the temperament of people in such civilizations. But that’s more of a nitpick.)

I am generally quite reluctant to accept any argument such as this one that reaches great sweeping conclusions about reality starting from no observations whatsoever. This one is not as bad as the Carter hypothesis in that respect (mostly because of the “choose one of three” aspect of the conclusion), but Bostrom is still reaching into his navel and pulling out a lot more than lint.

That’s true, I have [intentionally] flexed the case a little in order to focus on (3). He ends the paper with, “Unless we are now living in a simulation, our descendants will almost certainly never run an ancestor-simulation” which is, I agree, a slightly weaker claim.

I should not have done that.

what drove my skepticism initially is the idea: what would happen when we become sufficiently advanced to run simulations of past human civilizations? perhaps then, we are actually a simulation of a simulation. or a simulation of a … you get the point.

so why is there any reason to suspect we are a first-order simulation (let’s call it) any more than an nth-order simulation? and if it is more likely that we are a first-order simulation than actual reality, would it not then be more likely that we are a second-order simulation? so then, the higher the n, the more likely we are an nth-order simulation. so n tends to infinity. which seems absurd.

perhaps later i’ll strike down whatever assumptions exist that lead to that absurdity, but right now i’ve got work to do.
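The regress above can be made concrete with a toy model. Suppose (purely as an illustration; nothing in the paper fixes these numbers) that every universe capable of simulation runs s ancestor-simulations, each of which can in turn simulate, down to some maximum feasible depth. Then the number of observer-populations at level n grows as s^n, so most populations sit at the deepest levels, which is exactly the intuition driving the regress:

```python
def level_fractions(s: int, max_depth: int) -> list[float]:
    """Fraction of observer-populations at each nesting level 0..max_depth,
    assuming every universe at one level runs s simulations at the next."""
    counts = [s**n for n in range(max_depth + 1)]  # 1, s, s^2, ..., s^max_depth
    total = sum(counts)
    return [c / total for c in counts]

# With s=10 and a depth cap of 6, ~90% of populations sit at the deepest level:
fracs = level_fractions(10, 6)
print(fracs[-1])  # ~0.9
```

Note that the conclusion depends entirely on assuming the nesting bottoms out at some finite depth; without a cap, the sum diverges, which is the absurdity the post gestures at.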

Have you read Forever Free by Joe Haldeman?

What does it mean to be “living in a computer simulation” as opposed to “living in the real world”?

I don’t believe there’s much of a difference; hence, the distinction doesn’t mean very much.

Have you ever looked at your hands?

No, I mean, have you ever LOOKED at your HANDS, man?

As far as the nth-level simulation goes, Ramanujan, I don’t think the recursion holds. In order to have n levels of simulation like that, technology would have to have almost unlimited capability. That is, to assume that I’m the simulation of a simulation of a simulation ad infinitum would require such a large gap in technology between (say) level one and level three that the bland indifference principle would no longer hold; i.e., there would no longer be sufficient similarity (which is what the argument hinges on).

No, KellyM, in fact I’ve never even heard of it.

Orbifold, I do wish to say, however, that the author’s bias towards (3) is not evident in the paper itself, but exists. If you look at the third link you’ll see why I chose the interpretation I did.

Given humans’ propensity for modelling events in both scientific and gaming contexts, I think it is clear what led him to develop the argument. Indeed, in his summary, when he reaches the point where he says all three possibilities should be considered roughly equal due to lack of information, he goes on to say, “Let us consider the options in a little more detail.” Why, if not to show his ideas regarding (3)? :wink: (Which he does.)

TVAA, the thing about an intentional simulation versus your ordinary brain-in-a-vat scenario or experience machine is that our knowledge of the simulated world is guaranteed to roughly obtain in the real world as well. The implication is that many more possibilities open up. A simulated life with subjective experiences, for example, might very well have an afterlife as “its” processes were reused in various simulations. Other ideas might lead one to the conclusion that life is just a game, and so we should try to have as much fun as possible (i.e., a drive to hedonism would gain much firmer ground). Metaphysics that discusses “things in themselves” would also have a higher priority since, indeed, what we experience is definitionally ideal, but what these idealizations are based on lies behind the idealizations themselves. Polytheism and monotheism would gain more foundation (the god(s) of the machine!). Etc.

I don’t think its implications are entirely trivial.

I’ve asked this question before, because there are curious things about the universe that do resemble the properties of a simulation.

Take for example quantum uncertainty and the observer effect - if I was writing a bit of software to simulate a universe and calculation resources were at a premium, I might well devise a system whereby everything is expressed in vague, general terms until it is closely observed, then the simulation would render those calculations at a higher resolution on demand.
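That on-demand idea maps neatly onto lazy evaluation. A toy sketch (entirely my own illustration, not anything from the paper or from physics): regions are kept as cheap summaries until first observed, at which point a detailed state is computed and cached:

```python
class LazyUniverse:
    """Toy level-of-detail store: regions stay coarse until observed."""

    def __init__(self):
        self._detailed = {}  # cache of regions rendered at full resolution

    def observe(self, region: str) -> str:
        # Render detail only on first observation, then reuse the cache.
        if region not in self._detailed:
            self._detailed[region] = f"high-res state of {region}"
        return self._detailed[region]

    def is_rendered(self, region: str) -> bool:
        return region in self._detailed

u = LazyUniverse()
print(u.is_rendered("Andromeda"))  # False: still a painted backdrop
u.observe("Andromeda")
print(u.is_rendered("Andromeda"))  # True: rendered on demand
```

The same memoize-on-first-access pattern is how games implement level-of-detail rendering, which is presumably what makes the analogy feel so natural.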

Or take the double-slit diffraction experiment that is used to demonstrate interference and the wave properties of light - if I were mathematically modelling the behaviour of light, I might apply a set of rules that describe its bulk behaviour, not being too concerned that the same rules would still apply to individual packets of it even where that doesn’t really make sense (single photons, when passed through the double slits, behave as if they are interacting with other photons, even though there aren’t actually any there).

I could go on about relativity and a few other things besides, but I won’t - to conclude that the universe is in fact a simulation on these kinds of grounds would be hasty - it is much more likely that such impressions are just borne of general confusion over the non-intuitiveness of quantum physics.

They’re onto me. This will probably skew the results. I’d better pull the plug on this one and reboot.

The problems I have with the possibility of living in a simulated universe are more computational than philosophical in nature.

The work of David Wolpert at the NASA Ames centre has defined some theoretical upper limits for computation. The upshot of his work seems to be:

1:
The most efficient simulation of the universe is the universe itself. It is impossible to build a complete simulation of the universe inside itself.

2:
It is impossible to build a complete simulation of any part or subset of the universe such that the simulation runs fast enough to predict events before they actually happen.

This means that any simulated universe (in which we may or may not be living) will be necessarily smaller (in terms of information content rather than size) than the universe in which its hardware is built and will necessarily run slower than the actual physical processes in the parent universe.

Consequently the “posthuman” simulation builders would be watching us live in slow motion (probably slower by several orders of magnitude) in a relatively small toy universe.

This can be used to place a guesstimated limit on the depth of the simulation in which we are living - for example, if the simulation speed difference was a factor of ten then the sixth level of nested simulation would be running a million times slower than the top-level unsimulated universe. Go much deeper than this and the original “real” universe would live, expand and die in a subjective few seconds as experienced by the simulation dwellers. The top-level universe’s death would, of course, take all the simulated ones with it.
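The arithmetic behind that guesstimate is just compounding the per-level slowdown (the factor of ten is the post's own illustrative figure, not anything from Wolpert):

```python
def slowdown(per_level_factor: float, depth: int) -> float:
    """Total slowdown of a simulation nested `depth` levels below the real
    universe, assuming each level runs `per_level_factor` times slower than
    the level above it."""
    return per_level_factor ** depth

print(slowdown(10, 6))  # prints 1000000: the sixth nested level runs a million times slower
```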

Then there’s the question of resources devoted to the simulation. Wolpert’s work shows that even if we devoted all the energy and matter in this universe to running a simulation (even to the point of creating an infinitely dense, infinitely fast computer with capabilities beyond Turing completeness) we couldn’t create a complete simulation of our own universe. This raises the question: what level of resources would the posthumans be willing to devote to a simulation? Fifty percent of their universe? Ninety percent? Almost any level sufficient to simulate the universe we observe would require an immense devotion of resources, or would imply that the universe in which our simulation hardware is built is significantly larger than the universe we can see.

So, any given simulation would really need to take some shortcuts. It’s always possible that only the immediate vicinity of Earth is simulated in detail and the rest of the universe is the equivalent of a painted backdrop.

As we expand the areas we observe in detail and, possibly, inhabit then we expand exponentially the resources that any simulation computer would need to consume in the universe in which it is built. At some point we would have to accept that either we are not living in a simulation because it would require an insane amount of resources to run or that the parent universe in which the simulation hardware is built is of a fundamentally different nature to the one we apparently inhabit. In the second case we’re probably not talking about posthumans as the simulation builders, but something else entirely.

Since you’re not supposed to know this, your part in the early-21st-Century sim will be terminated.

No, seriously, my response is twofold:

  1. Most truly self-aware conscious minds within sims will know they are within sims. So if you have to ask, you’re probably not.
  2. With the disappearance of cheap fossil fuels in the late 21st Century, full computer simulation became economically prohibitive. The only reason this sim is running is that we have massive solar collectors set up on Mercury, & we had to lie, cheat, kill & steal to get it built. So, no, it’s not that common.

Not too shabby.

Armilla: I think the paper posits a simulation of our “evolutionary history”, which is a great deal smaller than a simulation of the entire universe. (Your post was interesting and worth reading, however.)

Question: Why does the author assume that comprehensive simulations of, say, Earth 2003, will necessarily produce one or more examples of consciousness? Until we develop a good model of consciousness, it isn’t clear that this would be the case. As an example, my tape recorder may at times quack like a duck, but that doesn’t mean that it is a duck.

It may (or may not) be the case that generating consciousness is sufficiently expensive that future simulations typically bypass that cost.

I suppose this argues for modifying option 2 somewhat.

But what do I know? I’m just a simulation programmed by pravnick. Oh, and the winner of the 2004 election is <urk! Bzzzzzzzz[sub]zzzz…tz. txz. ptcht[/sub].>

Ack! Armilla: Not only was your post “interesting and worth reading”, it directly addressed the point that I made. Sorry 'bout that.

I have no objection to point one – it’s what I’ve been telling you people all along! – but point two isn’t so much incorrect as incomplete.

It is impossible to build a perfect emulation of any deep system. It’s perfectly possible to build emulations of higher-level systems, but they won’t necessarily behave identically to the original or each other. It is possible to build models capable of predicting certain aspects of a system in advance, but significant levels of uncertainty will always be present.

That paper is a doozy. I downloaded it last night but got caught up watching Sphere on FX (guess we all see where my priorities lie; I’ve even already seen the movie and wasn’t too impressed compared to the book! :stuck_out_tongue: ). I’ve printed it out today and intend to go over it with a fine-tooth comb to see how closely it applies to the question at hand, which is not one of identical simulation but of similar simulation. Clearly we face diminishing returns on arbitrarily close simulation, so the end result of this paper is (at least for me) intuitive, but it is not strictly clear at this time how much it impacts the question at hand. A more concise comparison could be done if we ever pin down what exactly we mean by “similar” and how much variance in manifestation (or representation) we can accept and still feel it is “close”.

Thanks for the link, Armilla.

This is, in fact, covered in Bostrom’s (2).

It’s quite true that most of the conclusions of the paper I posted are intuitive, but some of them aren’t unless you really think about it. It is, at least, a clear demonstration that any simulated system must be both smaller and slower than the system that contains it.

I read the Bostrom paper and, with regard to the resources he allocates to running the simulation, I think he’s underestimating. While I admit I didn’t give it a really deep reading, I got the impression that he missed a step in the calculation of resources.

He talks about the resources needed to run a simulation of a human brain, but not about the resources to simulate an immersive environment for the mind. If we are indeed products of a simulated reality then there’s more at work than just simulating our minds - there’s a whole world out there.

It’s true that when we assume that our simulation is an approximate one then some of the problems disappear. However we, as a race (simulated or not) are probing into the fabric and nature of the universe with scientific tools like particle accelerators. This suggests to me that the simulation would have to be pretty deep and complete in order for it to be able to support these activities. On this basis you would probably need a computer of a size approaching that of the Earth in order to run the simulation we perceive at a level of approximation lower than those we can currently probe experimentally. And this would just simulate our immediate environment (rest of the universe as a painted backdrop etc).

The theory also opens a narrow window onto some possibilities for proving that we are living in a simulation. For example, imagine that we find a bug in the simulation program that allows us to run computations directly on the simulation’s hardware. This would allow these computations to complete in shorter times than Wolpert’s work suggests is possible. This theoretical “Talford Hack” would be strong evidence that we are running in a simulated system (while at the same time running the risk of crashing the universe - nothing’s ever easy).

I still think one of the most productive ways to approach the question is by looking at the theoretical limits of computation rather than playing philosophical odds. It remains a fascinating arena for thought games, though, whichever way you approach it.

I wasn’t aware anyone was actually arguing the point, but I admit I am not in every thread.

Why would our descendants build simulations at all? Is it some future version of dot-com investing?
[list=1]
[li]Create simulation of past human epoch[/li][li]?[/li][li]Make money![/li][/list=1]

If our descendants will still fall for that, how evolved could they possibly be?