Are We Just A Computer Simulation?

Well, better to say, “We don’t think we live in such a universe”. The “Last Tuesday” issue was mentioned earlier. No need to have it run for 100 billion years if you can boot it into a state where it is already “old”.

Plus, there’s the “Arthur Dent’s brain” problem:

“We can replace your brain with a simulated one - a simple one should suffice.”

“Yeah, it would be programmed to say, ‘Where are we? I’m confused. Where’s the tea?’, and who would know the difference?”

“I’d know the difference!”

“No you wouldn’t, you’d be programmed not to!”

Everything you’re talking about here - 100 billion years, the energy of the supernova, etc. - seems large to us. You have no idea whether that’s a long time or a lot of energy or just plain trivial to the folks in the parent universe. You just can’t apply your sense of largeness to a universe you know nothing about. You also can’t assume anything about the mapping from our time to theirs - you say

but that’s not true, not even in our universe.

Here’s a proof that reality is not a computer simulation - that is, if you agree with the premises.

  1. All simulations are mere approximations of reality.
  2. No approximations of reality are the same as reality itself.
  3. Therefore, reality itself is not a simulation.

The debate then devolves to the age-old question of whether we are real, and what is reality? The pragmatic answer is that reality is subjective - if it’s real to you, then as far as you are concerned it is reality. A host of metaphysical problems emerge from reality being subjective, but assuming it is, the titular question is deductively answered in the negative.

~Max

If only it were so easy. We’d love to be able to simulate a chip starting from just before the failure, but it is basically impossible. For testing, where you need to define the state at the beginning of a test, you need reset lines to put memory elements into known states. I haven’t seen one for the universe. I’d argue that it would be easier to simulate for 13 billion years than to place all the trillions of stars, planets, atoms of dust, etc. Plus, you’d need to add the light from the times that were not simulated but which any inhabitants of your simulation would see, stretching back 13 billion years.
Last Thursdayism assumes an omnipotent god. If you are saying the simulation writers are omnipotent and immortal, fine - you’re just saying this is how God did it. But if you have lower-level simulations, as in the argument, then the top-level god would have to simulate lower-level gods too, who can run the equally complex lower-level simulations. It’s deities all the way down.

A second or an hour of simulated processor time are both small to me, but I’m always going to want to do the minimum necessary. If you want to define the simulation writers as magical immortals, all bets are off, but my reading of the argument is that it supposedly explains the universe without magic. And each simulation writer would have to be equally magical. That means no more than one per simulated universe, which reduces the odds that we’re a simulation by quite a bit.

Really? At a detailed level? True for a very high level simulation with not much granularity, but not for the details that we see for our universe. Give me an example, please.

You cannot extrapolate from one data point. Your arguments continue to assume that our universe has something meaningful to say about a creator universe. That cannot be known.

All arguments about creators have to start from our total lack of knowledge about what such a super-creator is or what it is capable of. Otherwise you are just using a science-y version of “God created Man in his image.”

I can simulate an Apple ][ on my Core i7 9770 at least an order of magnitude faster than the real Apple ][ runs. Ditto for lots of ’80s computer games - try playing Pac-Man at 100 times normal speed!

My point is, for all we know, our 3-4 dimensional universe is a simplified version of the 10-dimensional “real” universe that our universe simulator is running in. Compared to their actual reality, ours is as simple as Conway’s Game of Life.

I agree with all the others who say that this is an unscientific question, since it’s fundamentally unanswerable - there’s no way for the Apple ][ being simulated to know that it’s not running on real hardware at normal speed. But some of the energy and complexity objections seem silly to me, since (if we are in a simulation) we have no way of knowing what the size, complexity, dimensionality, etc., of the “real” universe is.

Also, there’s no reason to think that we’re in any way the objective of this simulation - there’s no reason to simulate the whole universe that we perceive just to get life on some little planet. The universe itself could be the project, maybe to see how some 3-dimensional atomic-level simulation would change over time, and to give some 10-dimensional being further insight into its own reality.

We’re like a tiny glider pattern on a humungous Conway’s Game of Life simulation that just happened to do something mildly interesting in one tiny section of the huge board.
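
If anyone wants to see just how little machinery a glider needs, here’s a minimal Python sketch of Life’s rules pushing one across an unbounded board (the coordinates and step count are just illustrative choices):

```python
from collections import Counter

def step(live):
    """Advance one Life generation; `live` is a set of (x, y) cells."""
    # Count how many live neighbours each board position has.
    neighbours = Counter(
        (x + dx, y + dy)
        for x, y in live
        for dx in (-1, 0, 1)
        for dy in (-1, 0, 1)
        if (dx, dy) != (0, 0)
    )
    # Birth on exactly 3 neighbours; survival on 2 or 3.
    return {cell for cell, n in neighbours.items()
            if n == 3 or (n == 2 and cell in live)}

# The classic five-cell glider.
glider = {(1, 0), (2, 1), (0, 2), (1, 2), (2, 2)}
cells = glider
for _ in range(4):
    cells = step(cells)

# After 4 generations the glider reappears, shifted one cell diagonally.
assert cells == {(x + 1, y + 1) for x, y in glider}
```

Five cells and two rules, and it crawls across the board forever.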

No, you can’t. @Voyager has a point, as stated when he said that this is “true for a very high level simulation with not much granularity, but not for the details that we see for our universe”. You can approximately simulate the instruction set of an Apple II, although even that may have visibly different behaviours from the real machine for anything that is timing dependent. But in any event, you are not simulating the detailed behaviour of the real machine’s logic gates and physical characteristics. For instance, if you had a real machine, you might observe the processor temperature increase after running a particular sequence of instructions; you cannot observe this in your instruction-set simulator. Nor can you run the simulator under different environmental conditions and infer component failure rates. You can’t measure RF radiation from your simulator. Your “simulator” is just a very high level simulation of certain limited aspects of the Apple II reality that we happen to consider useful from a software standpoint.

I mean, it’s true that I can’t simulate this universe from within this universe, or even a small portion of this universe from within this universe at anything approaching real time, but so what? The simulator, if there is one, is not of “this universe”, just like my Apple ][ emulator is not on an Apple ][.

Conway’s Game of Life can be used to build a Turing-complete computer, but a Game of Life running on that Turing machine will definitely run slower than the host Game of Life that implements the computer in the first place.

I’ve heard this as an argument about why it’s supposedly most likely we are in a simulation, but to me, it seems like this argument suggests the opposite.

Suppose we are in a simulation; then the simulations we create cannot possibly command more computing resources than the computer in which our universe is hosted.
If a civilisation arises inside our own simulation of the universe, it cannot possibly create more computing resources than the computer that we use to run it.
And so on, in ever-decreasing stages.

So if we found that we hit a limit where we cannot create enough computing resources to simulate a universe, I would say that would be a compelling piece of evidence that we ourselves may be inside someone else’s limited simulation environment.
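
As a toy illustration of why the nesting has to bottom out, assume - with purely made-up numbers - that each universe can spare only a fixed fraction of its compute for a child simulation:

```python
# Hypothetical figures chosen only to show the geometric decay.
budget = 1e30      # ops/sec available to the top-level universe
fraction = 1e-6    # share of its compute a universe can devote to a child
min_viable = 1e12  # ops/sec below which no interesting universe runs

level = 0
while budget * fraction >= min_viable:
    budget *= fraction  # the child simulation gets only this much
    level += 1

print(f"nesting bottoms out after {level} levels")  # -> 3 with these numbers
```

Whatever the actual numbers, the budgets shrink geometrically, so the tower of simulations is finite.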

Sure, but to an entity within Conway’s game of life, whose perception of time is the steps in the game, our external perception of the length of those steps is irrelevant.

I totally agree. The emulated Apple ][ “thinks” it’s running at regular speed.

We don’t know how fast our simulation is running, since a time step is a time step, even if it takes 100 ten-dimensional years in real space.

I want to be totally clear here – I’m not saying we’re in a simulation. I think that question is unanswerable and, thus, unscientific. But, if we’re going to noodle about the implications of it, saying it would be too complex or run too slowly doesn’t make sense, since we have no idea about the universe that the hypothetical simulation is running in.

You seem to be limiting yourself to stepwise processing. It is much easier to calculate

y = 2x

than

y = \sum_{i=1}^{x} 2
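
In code, that’s the difference between one multiplication and a loop of x additions - a trivial Python sketch:

```python
def double_closed_form(x):
    return 2 * x  # one multiplication, regardless of x

def double_stepwise(x):
    y = 0
    for _ in range(x):  # one addition per simulated step
        y += 2
    return y

# Same answer, wildly different cost.
assert double_closed_form(10**6) == double_stepwise(10**6)
```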

~Max

Did you just break a mitzvah? (However one spells it.)

Eh, there are 612 more.

All you turkeys are P-zombies anyways, so, like, whatevs.

What is your benchmark for “reality”? The “simulation” you’ve spent the last… years… days… seconds… in - well, we don’t really know how long you’ve been here, do we?

What do you mean “we”? I know I’m real. At least I sense that I “exist”. I might be a brain in a jar or an NPC in some videogame. How do I know whether everyone else in the sim is a “bot” or an “avatar”? At suitable levels of complexity, is there a difference?

Just like “someone” who’s a figment of the simulation would post.

That’s what I was referring to when I mentioned level of detail. Your Apple II emulation was not at the transistor level. I simulated a Lockheed SUE microinstruction set emulating my old LGP-21 on a PDP-11, and it ran faster - but that was at the microinstruction-set level, not the transistor level.
If the level of granularity we saw in our world were molecules or maybe a bit higher, then I’d buy that our universe could be simulated faster than normal speed. But not at the level of detail that we can see.

I’m not following you. Pretty much all simulators, except perhaps those for analog circuits, run stepwise, and one of the arguments for the simulation hypothesis (perhaps the best one) is that Planck time is a natural simulation step.
Try to do it all at once and you run into horrendous synchronization problems. Even high level simulators I’ve written need some kind of timing wheel, as it’s called, to schedule things.
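
For anyone who hasn’t met one, a timing wheel is just a circular array of buckets of pending events; a bare-bones Python sketch (the slot count and demo events are arbitrary):

```python
class TimingWheel:
    """Circular array of event buckets; each tick fires the current bucket."""

    def __init__(self, slots):
        self.slots = [[] for _ in range(slots)]
        self.now = 0

    def schedule(self, delay, action):
        # Only delays shorter than one revolution fit on the wheel.
        assert 0 < delay < len(self.slots)
        self.slots[(self.now + delay) % len(self.slots)].append(action)

    def tick(self):
        # Drain everything due at this time step, then advance the clock.
        idx = self.now % len(self.slots)
        bucket, self.slots[idx] = self.slots[idx], []
        self.now += 1
        for action in bucket:
            action()

wheel = TimingWheel(8)
wheel.schedule(2, lambda: print("gate A settles"))
wheel.schedule(2, lambda: print("gate B settles"))
wheel.schedule(5, lambda: print("clock edge"))
for _ in range(6):
    wheel.tick()
```

Everything due at the same tick fires together, which is exactly the synchronization the stepwise approach buys you.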

That’s not necessarily the case, though. Some kinds of simulation can be empirically excluded, for instance, simulations that run on a computer like the ones we have: outcome sequences of quantum experiments are (1-)random, meaning they’re uncomputable—no deterministic computer can produce them. (Of course, indeterministic computers can, per definition, but in that case, one can question the meaning of ‘simulation’—the information content of our universe would essentially come from the parent universe, whose randomness is sampled—meaning, our universe might be more properly considered a part of the parent universe.)
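
The determinism point is easy to demonstrate: seed a pseudorandom generator identically and it replays the exact same “random” bits, something a genuine sequence of quantum outcomes provably cannot do. A toy Python sketch (the seed and sample size are arbitrary):

```python
import random

def fake_quantum_bits(seed, n):
    """Deterministic stand-in for n 'quantum' coin flips."""
    rng = random.Random(seed)
    return [rng.randint(0, 1) for _ in range(n)]

# Identical seeds replay identical outcomes: pseudorandomness is computable.
assert fake_quantum_bits(seed=42, n=20) == fake_quantum_bits(seed=42, n=20)
```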

It’s admittedly hard to detect these correlations, but Yurtsever has shown that one can always eventually exploit the hidden correlations introduced by trying to spoof quantum randomness with pseudorandom data to facilitate superluminal information transfer. So if we implemented that experiment, and found out we’re able to send signals faster than light, then that’d be strong evidence for the simulation hypothesis.

Incidentally, if you want to simulate a universe, making it a quantum universe would generically be just about the worst way to do it (hmm, it might actually be the single worst way: quantum mechanics, in some sense, maximizes the computational complexity of physical processes; post-quantum theories seem to get simpler again, if I remember correctly). That’s because a quantum state is an exponentially complex object, which is (part of) the reason quantum computers massively outperform classical ones on certain tasks. So if you wanted to save on computing power, you’d rather use a universe following classical rules, discretized to a level obeying the Shannon-Nyquist sampling condition, which ensures that the discretization isn’t observable in-universe.
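
To put a rough number on “exponentially complex”: a brute-force state-vector simulation stores 2^n complex amplitudes for n qubits. Assuming 16 bytes per amplitude (two 64-bit floats):

```python
# Memory needed for a naive state-vector simulation of n qubits.
for n in (10, 30, 50, 80):
    amplitudes = 2 ** n
    gib = amplitudes * 16 / 2**30
    print(f"{n:>2} qubits: {amplitudes:.2e} amplitudes, {gib:.2e} GiB")
```

Thirty qubits already need 16 GiB; eighty need about 2×10^25 bytes, far beyond any classical hardware - hence “worst way to do it”.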

Of course, I don’t think simulating a universe in this sense is possible in-principle: computation is just a particular interpretation of the structure of a given system, which needs to be interpreted in the right way to even compute anything, much less produce conscious observers. Thinking computation could give rise to a mind, as I’ve argued, then is essentially confusing the map for the territory: you need to go beyond structure, beyond relation, and include intrinsic properties for there to be minds; but then, you’ve already left the simulation. (That doesn’t guard against Matrix-like scenarios, where there’s a concrete physical system—a brain linked to a computer, in the Matrix-case, but any other substrate would do—merely being fed faked input data, but these aren’t simulated minds, they’re real minds being fooled.)