Was that a typo for
“any proposed theory that augments quantum mechanics with additional parameters needs to be non-local”?
Newton built on generations of experimental data: Galileo’s finding that gravity produces an acceleration rather than a constant velocity, and the work of the many astronomers who had measured the distance to the Moon and other celestial objects.
Err, yes, thanks for catching that.
Half Man Half Wit, can I just ask if I’ve got this right -
The Bell’s Inequality experiments say local realistic theories can’t account for the data, so we have
standard QM = local & non-realistic
de Broglie-Bohm = non-local & realistic
And maybe the superdeterminism loophole, which says the experiments aren’t valid because the sampling isn’t truly random (the universe is a conspiracy in some sense). Is this taken seriously?
Phew, I’m just relieved that I haven’t totally lost the plot…
Whether quantum mechanics should be called a local theory or not depends on your definition of ‘local’ and it is not difficult to find examples of QM being described as a non-local theory. It also depends on your view on how QM should be interpreted.
Realism is usually a property assigned to interpretations of QM, rather than to the theory itself, but again it is not like there is a rigorous definition of ‘realism’.
Well, it’s an issue of interpretation, depending on what exactly you take ‘standard QM’ to be. Many Worlds interpretations, for example, are both local and realistic: they get around Bell’s theorem, because it assumes that measurements have definite outcomes, but in MWI, every outcome occurs.
On most takes on ‘standard QM’, however, it’s local according to most definitions of locality: you can’t send superluminal signals, there’s no action at a distance, and the local description is independent of whatever happens on distant systems.
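To put the no-signalling part in symbols (just the standard density-matrix statement, nothing specific to any interpretation): whatever local operation $\Lambda_A$ one party applies on their side, the other party’s reduced state is unchanged,
\[
\rho_B = \mathrm{Tr}_A\!\left[(\Lambda_A \otimes \mathrm{id}_B)(\rho_{AB})\right] = \mathrm{Tr}_A\!\left[\rho_{AB}\right],
\]
which is why entanglement can’t be used to send superluminal signals.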
As for Bohmian mechanics, the issue is far more clear: it’s explicitly nonlocal—the quantum potential, which tells the particles how to move, depends on the coordinates of all particles simultaneously, so whatever happens to one may instantaneously influence all of the others.
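For concreteness, writing the N-particle wave function in polar form $\psi = R\,e^{iS/\hbar}$, the quantum potential in the usual presentation is
\[
Q(q_1,\dots,q_N,t) = -\sum_{k=1}^{N} \frac{\hbar^2}{2m_k}\,\frac{\nabla_k^2 R(q_1,\dots,q_N,t)}{R(q_1,\dots,q_N,t)},
\]
which depends on the positions of all N particles at once, so a change in the configuration of one particle instantaneously shifts the potential guiding every other.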
Regarding realism: most commonly, particle positions are definite, and hence one can treat them realistically; other properties, such as spin, don’t have independent reality. Rather, whether a spin detector (say, in a Stern-Gerlach experiment) clicks is determined by the position of the particle within the wave packet.
The superdeterminism issue is a little thornier still. Few people take it seriously, on the grounds that it is difficult to see how one would even do science in such a case: after all, we never observe ‘how the world really is’, but merely what we are constrained to observe by the superdeterministic rules. Furthermore, it has less explanatory capacity: quantum mechanics predicts the value of Bell inequality violations, and only violations consistent with this prediction are observed. E.g., in the case of the CHSH inequality, QM yields the Tsirelson bound of 2√2 ≈ 2.83 > 2, while a superdeterministic theory could just as well yield values up to 4. Thus, why we only see violations consistent with the quantum bound remains unexplained.
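Spelling out those numbers (standard textbook bounds, nothing model-specific): with two settings per side and $\pm1$ outcomes, the CHSH quantity is
\[
S = E(a,b) + E(a,b') + E(a',b) - E(a',b'),
\]
and local hidden variables give $|S| \le 2$, quantum mechanics gives $|S| \le 2\sqrt{2} \approx 2.83$ (the Tsirelson bound), while the bare algebraic maximum, which a superdeterministic conspiracy could in principle reach, is $|S| \le 4$.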
However, a prominent exception to this is Nobel laureate Gerard 't Hooft, who proposes an explanation of QM in terms of a cellular automaton model. It hasn’t found much resonance within the community, though.
Not me, I prefer thinking outside the box.
One sense in which one can consider even standard QM to be nonlocal is that even if you have full information about all of a system’s subsystems, it doesn’t follow that you have full information about the total system—there’s additional information in the correlations that doesn’t reduce to information about the system’s parts.
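The standard toy example: for a Bell pair
\[
|\Phi^+\rangle = \tfrac{1}{\sqrt{2}}\bigl(|00\rangle + |11\rangle\bigr), \qquad \rho_A = \rho_B = \tfrac{1}{2}\mathbb{1},
\]
each subsystem on its own looks like a maximally mixed, completely random qubit, yet the total state is pure. In entropy terms, $S(\rho_A) = S(\rho_B) = 1$ bit while $S(\rho_{AB}) = 0$, something that can’t happen for classical probability distributions.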
Think inside the box, because it’s bigger on the inside.
Johannes Kepler started with meticulous data on Mars’ path through the sky and, in an amazing feat without computers or sophisticated math, was able to derive Kepler’s (empirical) Laws of Planetary Motion. The “proof” of Newton’s laws of motion and gravitation was that it explained Kepler’s empirical Laws. There were a few scientists who posited an inverse-square law for celestial gravitation independently of and shortly before Newton did. But they were all reacting to Kepler’s empirical discovery (in particular, that orbits were ellipses).
But this doesn’t fully answer your question. Might Newton have developed the same laws in the pre-Keplerian world when orbits were thought to be circular? I don’t know.
By the way, I think that the three-photon GHZ experiment leads to a simpler example of quantum weirdness than Bell’s Theorem. I mentioned it in an earlier thread:
I don’t have time to read through all of that right now. Does it allow for the possibility that the “hidden variables” might be more complicated than 6 bits?
No one is opposed to the typo in the thread title?
The basic idea is the same as the version of the EPR paradox usually treated in Bell’s Theorem.
Bell’s result is that a probability which must be at least 0.333 if common-sense causality applies, is instead only 0.25.
The GHZ result is that a probability which must be 1 if common-sense causality applies, is instead 0.
The GHZ experimental setup is more complicated than Bell’s. But the paradoxical result is clearer and more “satisfying.”
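To sketch where those numbers come from (one standard way of setting things up; sign conventions vary): for the Bell version, take polarization-entangled photons in the state $(|HH\rangle + |VV\rangle)/\sqrt{2}$ and let each analyzer be set to one of three orientations $0^\circ$, $120^\circ$, $240^\circ$. Quantum mechanics gives
\[
P(\text{same result} \mid a,b) = \cos^2(a-b) = \tfrac{1}{4} \quad \text{for } |a-b| = 120^\circ,
\]
while perfect agreement at equal settings forces any local hidden-variable model to pre-assign an answer for all three settings; with three binary answers at least two must coincide, so $P(\text{same}) \ge \tfrac{1}{3}$ when two different settings are chosen at random. For GHZ, take $|\mathrm{GHZ}\rangle = (|000\rangle + |111\rangle)/\sqrt{2}$: the products $XYY$, $YXY$, $YYX$ each come out $-1$ with certainty, so pre-assigned values $x_i, y_i = \pm1$ would force
\[
x_1 x_2 x_3 = (x_1 y_2 y_3)(y_1 x_2 y_3)(y_1 y_2 x_3) = -1,
\]
i.e. the $XXX$ measurement “must” give $-1$ with probability 1; quantum mechanics predicts $+1$ every time, so that “certain” event has probability 0.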
There’s a non-zero probability it’s not a typo.
=====
Slight hijack here … SHOULD there be opposition to quanetum mechanics?
I asked only because a lot of lay explanations of entanglement-related weirdness do assume that hidden variables must be only one bit each, but it’s easy to come up with hidden variable models which are slightly more complicated, and which avoid those overly-simplistic arguments.
Perhaps my referring to the 6 potential binary observations as 6 “hidden” bits was the source of confusion. Reread my brief summary with that substitution.
Both GHZ and the usual Bell experiment involve the orientation of a polarizer; treating the polarization result as two “hidden” bits is tied to the fact that the experimenter uses just two orientations, no? In that sense, I think the treatments are equivalent.
On the other hand, if you want to posit a more complex “hidden” system, e.g. 137 bits instead of 6 bits, what difference would it make? The six bits I described would still be derivable from the 137 bits; and the “paradoxes” would apply with the same argumentative authority (see the sketch below).
TL;DR: Your objection, if any, applies equally to the usual Bell’s Theorem set-up.
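To make the “extra hidden bits don’t buy you anything” point concrete, here’s a little brute-force sketch (purely illustrative, in Python; the setup and names are mine, not anyone’s actual experiment):

from itertools import product

# However many hidden bits a pair carries, on any single run each side's
# outcome is a deterministic function of (hidden state, detector setting).
# With only two settings per side in the CHSH setup, each hidden state
# induces one of just 4 response functions per side; enumerating all 16
# combinations shows the CHSH value S never exceeds 2.

def chsh(E):
    # S = E(0,0) + E(0,1) + E(1,0) - E(1,1)
    return E(0, 0) + E(0, 1) + E(1, 0) - E(1, 1)

best = 0
for a_rule in product([+1, -1], repeat=2):      # Alice's outcome per setting
    for b_rule in product([+1, -1], repeat=2):  # Bob's outcome per setting
        E = lambda x, y: a_rule[x] * b_rule[y]  # deterministic correlator
        best = max(best, abs(chsh(E)))

print("max |S| over deterministic local strategies:", best)  # prints 2
# Averaging over hidden states (6 bits, 137 bits, whatever) only mixes these
# 16 strategies, so |S| <= 2 survives; QM reaches 2*sqrt(2) ~ 2.83.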
… And of course the whole discussion of a local hidden-variable system is counterfactual. The point of the E-P-R paradox, and its variations including the GHZ experiment, is that such systems do not describe reality.
septimus: Is it really that certain? There are some decent minds who still think that a hidden-variables model might work. That isn’t as foolish as, say, creationism or flat-earthism, is it? Even some of the posts in this thread seem to accept hidden variables as an abstract possibility, although far from a favored one.