This is all true. You don’t need to simulate everything, just enough to perform whatever kind of experiment / game the simulation is for.
Which means… me sitting around bored at home, in a cycle of making myself something to eat, playing video games, and going to the toilet, is a crucial part of the simulation – necessary to simulate in detail and to waste countless cycles on.
Even allowing for the fact that the “architects” couldn’t think of a better reality to create in the first place, surely they wouldn’t simulate every mundane, repetitive action every time?
Once again, thank you for your detailed and illuminating reply. Incidentally, I was just reading this interesting article about the “delayed choice experiment” once proposed by John Archibald Wheeler, in which your Mach-Zehnder interferometer once again makes an appearance in a starring role!
I accept that the last quoted statement is certainly true. But unless I’m misinterpreting you I’m not sure about the former – the lack of controversy.
Forgive my feeble understanding – I’m going out on a bit of a limb here – but as I understand it, efforts at establishing this determinism haven’t got very far, and both camps remain controversial. I’m thinking here of the Bohm interpretation, which posits hidden variables that somehow encode definite but unknowable values for things like the position and momentum of an electron. It seems that John Bell was instrumental in largely invalidating that sort of thinking with his Bell theorem:
Bell was certainly an opponent of superdeterminism, which is closely related to some of my comments about whether our futures are truly predetermined and truly knowable, given only enough information.
It states in its abstract that “…The present results, in excellent agreement with quantum mechanical predictions, lead to the greatest violation of generalized Bell’s inequalities ever achieved” and is certainly believed by some to be reasonably conclusive – if not definitive – evidence that certain aspects of quantum behavior are truly random.
Is it taking it too far to say that the Aspect et al. paper is evidence of non-determinism?
Here’s another interesting piece. It coincidentally also touches on the Bell theorem and seems to be along the same lines.
So I would again stubbornly say that it’s entirely plausible that even your Mach-Zehnder interferometer experiment is only demonstrating that in the appropriate circumstances photons prefer to act like waves, and in others, like particles – and that might be all it proves.

But wait! It occurs to me that maybe we’re both right! Maybe there’s another way to describe it. How about this: quantum evolution is deterministic, just as you say, but quantum collapse is not. There is true randomness at the point of decoherence. Thus the quantum world evolves deterministically, but whenever it’s disturbed and forced to collapse and interact with our macro world, the behavior is entirely random. This is an appealing view because it still provides the basis for a percolating random underlay to the universe that makes definite prediction of future events with complete accuracy impossible, and it supports philosophical notions of free will.
Bell was, in fact, a proponent of the Bohmian interpretation, even formulating a generalization for particles with spin in a relativistic context. His theorem does not invalidate it, but rather, merely establishes its nonlocality.
The controversy you mention exists regarding the question of determinism in the measurement process. The usual view is certainly that the collapse is random, while interpretations like Bohm’s and others replace it with a deterministic evolution of experimentally inaccessible parameters, so-called ‘hidden variables’ (which, as Bell’s theorem shows, can’t be locally defined, but must interact nonlocally).
Yes. Bell’s inequalities have nothing to do with determinism: they are obeyed by any theory which has local hidden parameters. That quantum mechanics violates them thus means that there are no such parameters. This may mean that certain quantities always have definite values—like position in Bohm’s interpretation. But then, they must be subject to nonlocal interactions, in Bohm’s theory mediated by the quantum potential. Or, there may not be any such parameters (which is the usual view). But even in that case, the theory may still be deterministic or not, depending on whether there’s a collapse or not.
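To put a number on that, here’s a toy sketch (mine, not from the Aspect paper) of the CHSH form of Bell’s inequality: any local-hidden-variable theory obeys |S| ≤ 2, while the singlet-state correlations E(a, b) = −cos(a − b) that quantum mechanics predicts reach 2√2 at the standard analyzer angles:

```python
import numpy as np

def E(a, b):
    """Singlet-state correlation for analyzer angles a and b (radians):
    quantum mechanics predicts E(a, b) = -cos(a - b)."""
    return -np.cos(a - b)

# Standard angle choices that maximize the CHSH combination.
a, a2, b, b2 = 0.0, np.pi / 2, np.pi / 4, 3 * np.pi / 4

S = E(a, b) - E(a, b2) + E(a2, b) + E(a2, b2)

print(f"|S| = {abs(S):.4f}")              # 2*sqrt(2) ~ 2.8284
print("local hidden-variable bound: |S| <= 2")
```

Note that nothing in this violation says anything about determinism; it only rules out *local* hidden parameters.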
That’s what I’ve been saying. Also see again the wiki I linked to earlier:
[QUOTE]
Consistent with Heisenberg, von Neumann postulated that there were two processes of wave function change:
[ol][li]The probabilistic, non-unitary, non-local, discontinuous change brought about by observation and measurement, as outlined above.[/li][li]The deterministic, unitary, continuous time evolution of an isolated system that obeys the Schrödinger equation (or a relativistic equivalent, i.e. the Dirac equation).[/li][/ol]
[/QUOTE]
(Source: Wave function collapse - Wikipedia)
Thus, there are two modes of evolution: measurement, and whatever happens between measurements. We know that, in ordinary quantum mechanics (excluding modifications like the GRW theory), the second process is always deterministic. Regarding the measurement process, the question of determinism depends on the interpretation: whenever there’s a collapse, you have a probabilistic theory, but when there’s no collapse, you have determinism (roughly).
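If it helps, here’s a minimal numerical sketch of von Neumann’s two processes for a single qubit (my own toy example, with an arbitrarily chosen Hamiltonian): process 2 always produces the same state from the same inputs, while process 1 scatters outcomes according to the Born rule.

```python
import numpy as np

rng = np.random.default_rng()

# Process 2: deterministic, unitary evolution. For the toy Hamiltonian
# H = sigma_x (hbar = 1), exp(-iHt) = cos(t)*I - i*sin(t)*sigma_x.
sigma_x = np.array([[0, 1], [1, 0]], dtype=complex)

def U(t):
    return np.cos(t) * np.eye(2) - 1j * np.sin(t) * sigma_x

psi0 = np.array([1, 0], dtype=complex)   # start in |0>
psi_t = U(0.7) @ psi0                    # same t in, same state out -- always

# Process 1: probabilistic collapse on measurement in the {|0>, |1>} basis.
def measure(psi):
    p0 = abs(psi[0]) ** 2                # Born-rule probability for outcome 0
    return 0 if rng.random() < p0 else 1

outcomes = [measure(psi_t) for _ in range(10)]
print("same pre-measurement state, scattered outcomes:", outcomes)
```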
Decoherence actually only gives the appearance of collapse within ordinary deterministic quantum evolution, by including the effect of the environment. Taking the totality of environment, measured system, and measurement apparatus into account, the whole thing is perfectly deterministic, and the appearance of indeterminism comes in through ignorance of the detailed state of the environment.
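As a concrete toy version of that point (assuming, for simplicity, an ‘environment’ of just one qubit): the total state evolves unitarily and stays pure, but tracing out the environment leaves the system looking like a classical mixture – the apparent collapse.

```python
import numpy as np

# System qubit in a superposition; one 'environment' qubit in |0>.
psi_sys = np.array([1, 1], dtype=complex) / np.sqrt(2)
psi_env = np.array([1, 0], dtype=complex)
total = np.kron(psi_sys, psi_env)            # pure product state

# Decohering interaction: a CNOT copies the system's basis state into
# the environment, entangling the two -- unitary, hence deterministic.
CNOT = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]], dtype=complex)
total = CNOT @ total                         # still a pure state overall

# Reduced state of the system alone: trace out the environment.
rho = np.outer(total, total.conj()).reshape(2, 2, 2, 2)
rho_sys = rho.trace(axis1=1, axis2=3)

print(np.round(rho_sys.real, 3))
# [[0.5 0. ]
#  [0.  0.5]] -- the off-diagonal coherences are gone: locally it looks
# like a classical coin flip, though nothing non-deterministic happened.
```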
I was delighted to read the above, and thought I actually had a useful insight. But then you threw this zinger at me…
So I’ve done more reading and banging of head against wall to see if I can come up with a conceptual explanation about quantum determinism that I could get you to accept. Now to see if I’ve succeeded, by dialing back my non-determinism claim one more notch. It begins with this definition from Wikipedia:
[QUOTE]
… decoherence is the mechanism by which the classical limit emerges from a quantum starting point and it determines the location of the quantum-classical boundary. Decoherence occurs when a system interacts with its environment in a thermodynamically irreversible way.
[/QUOTE]
I found an interesting description of decoherence as the establishment of entanglement-induced correlations between the system and the infinite degrees of freedom of the decohering environment. If one posits that the process increases entropy and is thermodynamically irreversible (per the above), isn’t it then non-deterministic?
Now, to your point as quoted, the key phrase being “totality of the environment”. Perhaps one could say that the evolution of the total environment is always deterministic, the “total environment” being the pure state of zero entropy – i.e., the entire universe, which has nothing else to entangle with. So here it comes:
The universe in its totality is always deterministic, but any observable quantum decoherence is always locally observed as random.
If that makes any kind of sense I will stop beating my head against the wall. However, it does raise an interesting question about what “the universe in its totality” actually means.
It also seems to imply a really fundamental role for quantum entanglement – not merely a strange quirk of quantum behavior, but fundamental to the nature of the universe.
Why would it be? Entropy increases in perfectly deterministic processes, as well. Consider the classic example of a gas whose molecules are completely contained in one half of a room; if one removes the dividing membrane and allows the gas to spread, entropy will increase. However, the “molecules” in this example may just as well be tiny billiard balls bouncing off one another, completely deterministically.
Thermodynamic irreversibility really just means that there are a lot more ways for a system to evolve in one direction than in the other, simply because there are many more states of the system in one direction. In the gas example, there are many more ways for the gas to be distributed across the whole room than contained in any given part of it. So whenever the system undergoes some state change, it is much more likely that the change is in the direction where there are more states for the system to occupy, meaning where the entropy is higher.
But every once in a while, entropy does, in fact, decrease; it’s just that the time it takes for that to happen (the Poincaré recurrence time) exceeds the lifetime of the universe for typical macroscopic systems.
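A quick counting sketch of why that is (my own illustrative numbers): with N molecules each independently in the left or right half, the fraction of microstates with all of them on one side is 2^−N.

```python
from math import comb

N = 100                                    # toy number of gas molecules

total_states = 2 ** N                      # each molecule: left or right half
p_all_left = 1 / total_states              # exactly one such microstate
p_near_even = sum(comb(N, k) for k in range(45, 56)) / total_states

print(f"P(all {N} molecules in one half) = {p_all_left:.1e}")   # ~7.9e-31
print(f"P(between 45 and 55 on the left) = {p_near_even:.2f}")  # ~0.73
# With N ~ 10^23 instead of 100, the first number is so small that the
# expected waiting time dwarfs any astronomical timescale -- even though
# every collision in between is perfectly deterministic.
```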
But this is true of any (non-closed) system, in classical just as well as in quantum mechanics. Even in a billiard game, when I keep track of only the balls in one half of the table, I have indeterminism, because I can never predict when, or if, a ball from the other half will enter and interact with the balls in my half.
Really, the only way to have indeterminism is to believe in a true wave function collapse (decoherence only yields apparent collapse, and is often proclaimed as the mechanism for the ‘splitting’ of worlds in many-worlds types of interpretations—which, as you recall, are deterministic). The fact that things like the Bohmian interpretation are possible shows that there is nothing in quantum mechanics that can force you to accept indeterminism; the theory simply does not settle the point.
If we are inside of a simulation, we have no idea whether the simulator is actually running quickly or slowly, since we have no external reference. This source makes a similar observation, and estimates that on the order of 10^36 computational operations could simulate all of human history. Therefore a suitably large computer (on the scale of planet-sized) could come up with a “solution” in well under a second. But the simulatees have no idea how fast they’re being “run”, since they do whatever they do each simulated time-step. If God “paused” the simulator for a few thousand years of Real Universe Time and then hit resume, we would never notice anything amiss.
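The back-of-the-envelope arithmetic, for what it’s worth (the 10^36 figure is from the linked source; the operations-per-second rate for a planet-sized computer is my own assumed ballpark, in the spirit of Bostrom’s estimates):

```python
# 1e36 ops: the linked source's estimate for all of human history.
# 1e42 ops/sec: an *assumed* rate for a planet-sized computer.
ops_for_human_history = 1e36
planet_ops_per_second = 1e42

print(f"{ops_for_human_history / planet_ops_per_second:.0e} seconds")
# ~1e-06: all of simulated human history in about a microsecond.
```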
We also have no idea what is “normal,” since again we have no external references for comparison. To use our video games as an example: if our world is simulated and one of the “Sims” people somehow managed to enter our world à la TRON, it would seem strange to them, since SimWorld is all they’ve ever known. It’s possible that the “real world” is similar to ours, and possible that it’s different in bizarre ways, although I would propose a bias for our programmer overlords to be simulating things that they have a priori experience with.
The presence of randomness does not mean we’re not in a simulation. Lots of game simulations have randomness, starting as simple as when the saucer appears in Space Invaders to complex interactions that invoke a “fudge” variable to make things interesting or to fully search a solution space Monte-Carlo style.
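For instance, here’s a toy game loop of my own (the per-tick probability is made up) that mixes deterministic rules with exactly this kind of random “fudge”:

```python
import random

rng = random.Random()        # the game's source of 'randomness'

def game_tick():
    """One step of otherwise deterministic game logic, plus a random
    'fudge': a saucer-style event with a small (made-up) probability."""
    return rng.random() < 1 / 600

appearances = sum(game_tick() for _ in range(10_000))
print(f"saucer appeared {appearances} times in 10,000 ticks")
```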
I would argue, in fact, that most realistic games and many if not most simulations do in fact have random elements because that’s the nature of the world as we perceive it. Whether this is true in “reality” or not has been the basis of my recent back and forth with Half Man Half Wit which has been very educational for me (I now accept deterministic quantum evolution) but which I think remains a little open-ended. I obsess about it because I want to know if my future is predetermined or not – if it is, I may as well just stay in bed all day.
Ah, but in a classical system you can easily define the boundaries of the system. In your example, half of the billiard table would indeed appear quite random, but one has only to extend the boundary to the whole table to show its determinism. Where are the boundaries in a quantum system – which could be entangled with a particle arbitrarily far away?
A bit of a philosophical ramble, if I may, further on this point of randomness and simulations, which Thrasymachus has set me off on again. Someone mentioned previously the idea of a deterministic universe in which everything has already happened; this actually aligns with a real mathematical model that Stephen Hawking proposed, called Euclidean space-time, in which time is truly a space-like dimension and space-time is finite but unbounded, like the surface of a sphere – which, as an added bonus, eliminates paradoxes from the Big Bang by reducing it to an ordinary coordinate point.
This reality isn’t accessible to us, but perhaps it’s the only one that is completely deterministic (indeed the word “determinism” has no meaning in such a model). Perhaps our version of reality, featuring the flow of time and entropy, will always appear to us to have elements that are completely random. To take this back to my joke about God running a simulator, it would be analogous to a simulator running pseudo-random number generators. The simulator is completely deterministic (if you take a memory snapshot at any instant and run it again, it will produce exactly the same results). But the fate of the simulated beings is non-deterministic in their reality, and they can never tell the difference.
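A minimal sketch of that last point, assuming the simulator draws its “randomness” from an ordinary seeded PRNG: snapshot the generator’s state, restore it, and the “random” future replays bit-for-bit.

```python
import random

sim_rng = random.Random(2024)                 # the simulator's seeded PRNG

past = [sim_rng.random() for _ in range(5)]   # 'history so far'

snapshot = sim_rng.getstate()                 # the 'memory snapshot'
future_run1 = [sim_rng.random() for _ in range(5)]

sim_rng.setstate(snapshot)                    # restore and run again
future_run2 = [sim_rng.random() for _ in range(5)]

print(future_run1 == future_run2)             # True: a bit-identical 'random' future
# From inside, each draw looks statistically random; from outside, the
# entire run is perfectly reproducible -- deterministic through and through.
```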
Both in classical and quantum mechanics boundaries are well defined until the system interacts with something else and changes its state as a result. In quantum mechanics, systems may not be local, but that doesn’t impinge on the point; the key is that you need local interaction to generate entanglement.
And exactly when the system interacts with something else, the apparent indeterminism comes into play: when the billiard ball is struck by a player, who himself was moved to do so by some other external causes, and so on. You find yourself forced to extend your system boundaries further and further down the causal chain: it’s interaction, not boundaries, that leads to the apparent indeterminism, classically as well as in quantum mechanics.
In the Hartle-Hawking proposal, the universe starts out Euclidean, but evolves to a Lorentzian (4d space-time) manifold. This uses a ‘trick’ in quantum field theory known as Wick rotation, which exchanges the real time coordinate with an imaginary one, leading it to behave like a (periodic) spatial coordinate; this is commonly used in calculations in quantum field theory, because the mathematics (path integrals) is much better behaved in the Euclideanized version. Typically, the result is then obtained by Wick rotating again to real time.
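In symbols, the substitution is just t = −iτ (with c = 1), which flips the Lorentzian line element to a Euclidean one and turns the oscillatory path-integral weight into a damped one:

```latex
% Wick rotation: t = -i\tau (units with c = 1)
ds^2 = -dt^2 + dx^2 + dy^2 + dz^2
\quad\longrightarrow\quad
ds^2 = d\tau^2 + dx^2 + dy^2 + dz^2,
\qquad
e^{iS/\hbar} \;\longrightarrow\; e^{-S_E/\hbar}.
```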
Hartle and Hawking have proposed a physical meaning for this process, leading to the universe actually transitioning from a Euclidean to a Lorentzian metric at the Planck time; this does lead to a ‘smoothing out’ of the Big Bang singularity, transforming it merely into an artefact of the coordinatization, but it doesn’t lead to a universe in which ‘everything has already happened’. (That we do live in such a universe, however, is a classical viewpoint in the interpretation of general relativity, in which the notion of absolute simultaneity is lost, and thus the ‘present plane’ of one of two co-located but differently moving observers may be to the future of the other’s; present and past then exist ‘simultaneously’. See the Rietdijk-Putnam argument for a realization involving aliens.)
Yes, but as soon as these things come into contact with something else, the entanglement is broken—like in a Bell experiment, once you measure, you don’t have an entangled pair of particles anymore. And besides, even if the entanglement persists, you have a volume whose size is bounded by the distance light could have covered since the entangling event influenced by that event—and the same volume is influenced by something classical happening at that time.
Two issues for me: (a) the ‘big bang’ sounds a lot like someone hitting a start button – there was nothing and then there was a bang; how does that work? … and (b) the big cheat could be water – what the hell is that stuff?
Layman’s terms: it works because the universe is four-dimensional, time is also a dimension. Instead of the universe being stuff in space, it is stuff in space-time.
I have no idea what you are talking about with water.
Couldn’t agree more. I find so many flaws that, more and more, I’m believing that we are a computer game or something to that effect.
Some people believe in an ALL-POWERFUL GOD, that just happens to need a little help in killing the nonbelievers. It makes no sense.
In the U.S. what type of person do we get to vote into the White House? Lawyers, generals, the wealthy…all of them utterly disconnected from the wants and needs of the average citizen. It makes no sense.
Spectator sports. I had a friend who was a Red Sox baseball fanatic and talked my ear off about them. On one particular day I was already irritated when he mentioned how “We won that game by the skin of our teeth.”
I asked him if he was aware that this sport was all about a bunch of grown men trying to hit a round object with a stick. He said yes.
Then I inquired, “You do know that the Boston Red Sox isn’t owned by Boston, but privately owned by some company out of New York?” He agreed.
I also informed him that most of the players aren’t from Boston, or from Massachusetts, or even from New England, and that some of them aren’t even from this country.
He shrugged his shoulders and said, “I don’t care, I just like baseball.”
In order for such mindless activities to occur, as mentioned above, surely these people are pre-programmed to believe in such nonsense – beliefs that override logic.
If something scientists did caused a screen to appear in midair labeled “Universe Debugging Console”, that would be hard to explain away, I’d think.
Whether such a simulation is non-falsifiable depends mostly on the intent of its creators, I think. If the creators specifically intend for us to not know then yes, you run into the same problem as with creationism; we aren’t going to find out anything that someone effectively omnipotent doesn’t want us to know. If nothing else they can directly interfere with our thought processes and make us incapable of noticing even the most blatant evidence.
On the other hand if there’s no deliberate effort to hide the truth from us, if they don’t even know or care that we exist then it might be quite possible for us to find some flaws in the simulation. Even ways to alter it for our own purposes. For that matter there’s no reason to assume that the simulators are restrained by any silly “faith is better than reason” rules, and they could contact us themselves at any time - which could be good or bad. It could be anything from “sorry guys, we’re shutting down the simulation due to budget cuts” to, “Uh, sorry we didn’t realize part of the simulation had developed sapience. Here’s your admin privilege passwords and link to the outside network.”
I don’t have a fleshed-out answer, but I believe I have a road worth exploring with regard to the likelihood of our (or just your) being in a Matrix-style simulation. There may be a paradox lurking along this road that can be exploited to falsify conscious simulation… but I’ll need your help to flesh it out. First, let me list some assumptions and tell me if you see any logical flaws with them:
[ul]
[li]As in Matrix-type simulations, we humans would have real higher-order consciousness, but only an illusory universe to exist in. [/li]
[li]If we live in a simulated universe, we cannot interact physically with the real universe, only the simulated one.[/li]
[li]A conscious, self-aware mind, as experienced in the first person, can’t be a simulation, it can only be real. [/li]
[li]Whoever created our real consciousness and simulated universe must themselves have a real higher order consciousness. But, can they themselves live in a simulated universe with no real physical interaction? If we can without breaking physical law, they should be able to as well. [/li]
[li]If it’s possible for something to create us in a simulated universe, then it follows that it’s possible for our creators to have been created in a simulated universe…ad infinitum. But, is this really possible?[/li]
[li]If it’s possible for us to be created in a simulated universe, then it’s possible for our creators to be created in a simulated universe; if it’s not possible for our creators to be created in a simulated universe, then it should not be possible for us to have been either.[/li][/ul]
A paradox lies in there somewhere, I think…
Discussion: I can accept that something real (a conscious being) in a real universe can create something real (another conscious being) in an unreal universe, but I can’t accept that something real in an unreal universe can create something else real in an unreal universe. How could it do so?
I believe a conscious being in a real universe can create a conscious being in a simulated universe because the creator has the ability to physically manipulate matter in his real universe to make something real (another conscious mind) to exist in either the creator’s universe or a simulated universe. If the second generation conscious being was created to exist in the creators’ real universe, he could then create a third generation conscious being in the same real universe…and so on and so forth. No problem breaking any physical laws in that scenario.
But how can a conscious being in a simulated universe create something real (another conscious being) in any universe, real or synthetic? He can’t! He’s got no real hands to make something real. He can make other imaginary objects in his simulated universe, of course, because inanimate objects in an illusory universe don’t have to be real. But I submit that a conscious, self-aware mind must be something real that can’t just be constructed mentally by someone else (toaster, yes; mind, no). The only way to deny that is to accept that your own consciousness is not real, and I don’t think that’s the premise of the Matrix, nor something you believe.
In other words, this chain of creators-creating-in-simulated-universes breaks down after the first generation. I suppose we could be the only simulated generation, so that this has occurred exactly once, but that doesn’t seem likely, or perhaps even possible. That would make us unique in the universe (all universes?). What happens if we humans succeed in creating a new conscious being? If the above is true, we won’t be able to succeed in a Matrix-type universe, because it would be a paradox if we did. So, who’s going to stop us? I believe we will someday succeed in creating a conscious being, so I don’t believe we can be living in a simulated universe. Sure, a counter-argument could be that the conscious being we create is itself just an illusion… but, c’mon, that’s a cop-out.
If you accept the premise that a conscious, self-aware being must be a real thing made physically by something else, then we, the creators, will not be able to create it in a simulated universe. And neither can anyone else.
Good discussion, but it doesn’t get around the question of whether we are in a simulation. There could be one real and one simulated, with our creator(s) living in the real one. Like I said previously, this question has been shown, through the history of various thinkers, to be unanswerable.
Whoa, this thread has legs! I thought it died early on. Time to read all these posts. Looks to be very interesting.
Going back a bit, I will comment on this:
Solipsism seems unlikely to me just because I know for a fact I’m not smart enough to create this whole world, especially with all the theories I can only vaguely understand. A simulation, at least, solves that problem.