Hah! Brilliant phrasing.
There’s lots of confusion around this point, even in the professional literature, down to some people who should know better arguing that Bell inequality violation does entail that quantum mechanics is nonlocal. It shows no such thing, but to demonstrate that, I’ll have to be a bit roundabout.
First of all, all that Bell inequalities are, mathematically speaking, is basically the following statement: there exists a joint probability distribution such that all observed data (outcome probabilities, correlations, etc.) can be obtained by marginalization from this distribution.
(Marginalization, for those that haven’t heard the term, basically just means that you sum over all the possible outcomes of the observables you aren’t considering—so if you have a probability distribution for two variables A and B which can yield outcomes a and b—say, both equal to plus or minus one—then in order to obtain the probability for A to come up a certain value, you just sum over the possible values of B; i.e. Pr(A = a) = Σ[sub]b[/sub]Pr(A = a, B = b) = Pr(A = a, B = +1) + Pr(A = a, B = -1).)
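To make that concrete, here is a minimal Python sketch of that marginalization; the joint probabilities are just made-up numbers for illustration.
[code]
# Hypothetical joint distribution Pr(A = a, B = b) for two variables
# that can each come up +1 or -1 (the numbers are chosen arbitrarily).
joint = {
    (+1, +1): 0.4,
    (+1, -1): 0.1,
    (-1, +1): 0.1,
    (-1, -1): 0.4,
}

# Marginalize over B: sum over all possible outcomes of B.
pr_A = {a: sum(joint[(a, b)] for b in (+1, -1)) for a in (+1, -1)}
print(pr_A)  # {1: 0.5, -1: 0.5}
[/code]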
Every Bell inequality can be derived from the requirement that such a joint probability distribution exists for all variables in the experiment. To illustrate, I’ll put a short derivation of the famous CHSH-Bell inequality in the spoiler box below (so the math-averse can easily skip it ;)).
[spoiler]
Take four variables A, B, C, D yielding values a, b, c, d that can be either +1 or -1. Assume there exists a joint distribution Pr(A, B, C, D) such that all experimental data can be obtained from it by marginalization. Define the correlator of two variables <AB> = Σ[sub]ab[/sub]abPr(A = a, B = b) = Pr(A = +1, B = +1) - Pr(A = +1, B = -1) - Pr(A = -1, B = +1) + Pr(A = -1, B = -1). A value of -1 for <AB> indicates perfect anticorrelation—whenever A comes up +1, B comes up -1, and vice versa—and a value of +1 indicates perfect correlation.
Now consider the quantity
CHSH = <AB> + <CB> + <CD> - <AD>
With the above definition of the correlators inserted, this becomes:
CHSH = Σ[sub]ab[/sub]abPr(A = a, B = b) + Σ[sub]cb[/sub]cbPr(C = c, B = b) + Σ[sub]cd[/sub]cdPr(C = c, D = d) - Σ[sub]ad[/sub]adPr(A = a, D = d)
Now we use the assumption above: every correlator can be obtained by marginalization from the joint probability distribution Pr(A = a, B = b, C = c, D = d), which I’ll call Pr(A, B, C, D) for convenience. Thus, e.g.:
<AB> = Σ[sub]ab[/sub]abPr(A = a, B = b) = Σ[sub]abcd[/sub]abPr(A, B, C, D)
Inserting this above yields:
CHSH = Σ[sub]abcd[/sub]Pr(A, B, C, D)(ab + cb + cd - ad),
since Pr(A, B, C, D) is common to every term in the sum. Now look at the term (ab + cb + cd - ad). We can rewrite it as:
(ab + cb + cd - ad) = a(b - d) + c(b + d)
Since all of a, b, c, and d can take the values +1 or -1, it is easy to see that this term can never be larger than 2: if b = d, the first term vanishes and the second is c(b + d) = ±2; if b = -d, the second term vanishes and the first is a(b - d) = ±2. In either case, the value is at most 2.
Thus, we can find an upper bound to the CHSH expression:
CHSH ≤ 2*Σ[sub]abcd[/sub]Pr(A, B, C, D) = 2,
since the sum over all outcomes must be 1. Hence, we get the CHSH inequality:
CHSH = <AB> + <CB> + <CD> - <AD> ≤ 2,
and all we had to assume is the existence of a joint distribution.[/spoiler]
The kicker is now that in quantum mechanics, such inequalities may be violated—the above CHSH inequality is bounded by the maximum value 2, while in quantum mechanics, we can achieve a maximum value of 2√2, about 2.83.
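To make those two numbers concrete, here is a small Python sketch (purely illustrative, nothing here is specific to any actual experiment): it brute-forces all deterministic assignments of a, b, c, d to confirm the classical bound of 2 (for a linear expression like CHSH, the maximum over all joint distributions is attained at a deterministic assignment), and then evaluates the quantum correlators for a maximally entangled two-qubit state at the standard textbook measurement angles, which give 2√2.
[code]
import numpy as np

# Classical bound: check all 2^4 deterministic assignments. Since CHSH is
# linear in the joint distribution, its maximum is attained at one of these.
best = max(a*b + c*b + c*d - a*d
           for a in (+1, -1) for b in (+1, -1)
           for c in (+1, -1) for d in (+1, -1))
print("classical bound:", best)  # 2

# Quantum value: spin measurements in the x-z plane on the state
# |Phi+> = (|00> + |11>)/sqrt(2).
sx = np.array([[0, 1], [1, 0]])
sz = np.array([[1, 0], [0, -1]])

def obs(theta):
    # Observable measured along angle theta in the x-z plane
    return np.cos(theta) * sz + np.sin(theta) * sx

phi = np.array([1, 0, 0, 1]) / np.sqrt(2)  # |Phi+>

def E(theta1, theta2):
    # Correlator <obs(theta1) (x) obs(theta2)> in the state |Phi+>
    return phi @ np.kron(obs(theta1), obs(theta2)) @ phi

# Standard optimal angles: A, C on one side, B, D on the other
A, C = 0.0, np.pi / 2
B, D = np.pi / 4, 3 * np.pi / 4
chsh = E(A, B) + E(C, B) + E(C, D) - E(A, D)
print("quantum value:", chsh)  # about 2.828..., i.e. 2*sqrt(2)
[/code]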
The question is, of course—what does this mean?
Recall that our sole assumption was the existence of a joint probability distribution. Now, what could happen in order for that assumption to fail? And the answer is, a lot of things—in fact, such a failure is perfectly ordinary. Anytime obtaining the value of one observable—doing a measurement—changes the value of another, for instance, we won’t have a joint distribution. You can model this by considering an urn model where, every time you draw a ball from the urn, the distribution of balls still remaining in the urn is changed—say, somebody adds some balls, or removes some others.
So, such disturbance is one way to violate our assumption. To safeguard against this, one can make use of physical principles, namely, that there is no possibility of disturbance across spacelike separation: a signal, travelling at most at the speed of light, would have to be sent from one place to the other in order for what you do at one place to have any consequence at the other. This is the only place where physical assumptions enter the picture: in order to give the inequalities empirical content, not in order to derive them (this point is very often confused).
Thus, one possibility to get a violation of Bell inequalities is some form of nonlocal influence.
However, there exists another way: there could simply not be a joint distribution in the first place, period. This is what usually gets paraded as the ‘realism’ assumption: if every observable has a pre-defined value, then there naturally exists a joint distribution. But if that’s not the case—if there is instead just some probability for each observable to manifest a value in an experiment—there is no reason to make this assumption.
Consider, for instance, the following case: a set of balls having three properties—mass, color, and size. You are barred from measuring all three simultaneously, but you can measure every possible pair (this parallels the quantum notion of complementarity—e.g. you can’t measure both position and momentum simultaneously). Each of these properties has two possible values: a ball can be light or heavy, red or green, large or small. Now consider the following distribution:
[ul]
[li]Whenever color and size are measured together, the outcome is either (green, small) or (red, large), with equal probability[/li]
[li]Whenever size and mass are measured together, the outcome is either (small, heavy) or (large, light), again with equal probability[/li]
[li]Whenever mass and color are measured together—well, you guessed it: (heavy, red) or (light, green), with probability 50% each[/li]
[/ul]
So, whenever a ball is chosen, it has probability 1/2 to be green (or red), large (or small), heavy (or light). Thus, you have sensible probabilities for all the measurements you could perform—measurements of single quantities, or measurements of pairs of them. Again, it’s important that you don’t think of the balls as having those properties, but rather, as manifesting them in a measurement context; otherwise, you will get suckered into effectively believing that there must exist a joint distribution.
But now note that if we were to try to observe all three properties simultaneously, we would get a contradiction: say we observe the ball to be green; then, it must also be small. Hence, it must also be heavy; but then, it must also be red!
Consequently, there can be no joint probability distribution Pr(M, C, S); but this doesn’t bother us, as we are prohibited from measuring all three properties simultaneously anyway.
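If you want to see that checked mechanically, here is a tiny Python sketch using just the names from the example above: any joint distribution would have to put all its weight on triples of values consistent with all three pairwise constraints, and a brute-force search finds that there are none.
[code]
# The only pairs that ever occur when two properties are measured together:
color_size = {("green", "small"), ("red", "large")}
size_mass  = {("small", "heavy"), ("large", "light")}
mass_color = {("heavy", "red"), ("light", "green")}

# A joint distribution Pr(M, C, S) would have to be supported on triples
# compatible with all three pairwise constraints at once.
consistent = [
    (c, s, m)
    for c in ("green", "red")
    for s in ("small", "large")
    for m in ("heavy", "light")
    if (c, s) in color_size and (s, m) in size_mass and (m, c) in mass_color
]
print(consistent)  # [] -- no such triple exists, hence no joint distribution
[/code]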
So, if you refrain from thinking of the balls in terms of things that have definite properties, and instead think of them as having certain probabilities to manifest properties in a measurement, there exist cases in which no joint probability distribution can be found, and thus, for which no Bell inequalities hold. And that’s how non-realism (really, value indefiniteness) gets you out of trouble here.
Hmm, I don’t really know the hidden variable theory you’re referring to here; do you have a reference?
The two most common hidden variable theories—Nelsonian stochastic mechanics and Bohmian mechanics—have no need for such a mechanism. In Nelson’s theory, there is simply an irreducible, fundamental source of randomness (spacetime fluctuations), which drives the decay. In Bohmian mechanics, all that is really definite is the particle positions.
To see how this explains decay, consider the following analogy: in a decay process, the wave function evolves over time from an initial ‘undecayed’ state to a superposition of ‘decayed’ and ‘undecayed’, where the ‘decayed’ branch accumulates more probability over time, which the ‘undecayed’ one consequently loses. Take two vats of liquid, one of which is gradually pumped into the other; in the first one, which starts out full, there is a little ball—the hidden variable—signifying that initially, the atom is in the ‘undecayed’ state. At some point, the ball will be sucked into the pump and transferred to the other vat—the ‘decayed’ state. When this happens depends entirely on where the ball was at the start; hence, nothing but those coordinates needs to be added to get a deterministic description.
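Here is a toy Python version of that analogy, purely for illustration; the ‘vat’ is just the unit interval and the draining schedule is chosen by hand so that the survival probability comes out exponential. Each ball’s decay time is completely fixed by its starting position (the hidden variable), yet the ensemble reproduces the usual random-looking decay statistics.
[code]
import numpy as np

rng = np.random.default_rng(0)
lam = 1.0        # decay constant, arbitrary units
n = 100_000      # number of "atoms"

# Hidden variable: where the ball sits in the (initially full) vat,
# uniformly distributed over [0, 1].
x = rng.uniform(0.0, 1.0, n)

# If the vat drains so that a fraction exp(-lam*t) of the liquid remains
# at time t, the ball at height x reaches the pump at a definite time:
t_decay = -np.log(x) / lam

# Deterministic for each ball, exponential for the ensemble:
print("mean lifetime:", t_decay.mean())                     # close to 1/lam
print("fraction undecayed at t=1:", np.mean(t_decay > 1.0))  # close to exp(-1), about 0.37
[/code]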
Just to say thanks for the extensive answer, it’s much appreciated - I will try to digest it later today when I can concentrate without distractions.
My own feeling is that hidden variable theories tend to operate on a sort of reverse-Occam’s-razor principle, i.e. what is the least we can get away with culling from the classical milieu (and you do have to cull some elements, as explained by HMHW). Of course other interpretations take a very different approach.
Like all the major interpretations, the major hidden variable theories have their good and their bad points (depending on your point of view).
Thank you, Carl Pham !!!
Your post is the best I’ve ever seen: a short summary of what quantum physics is able (and not able) to explain. And in simple English that I can understand.
Nope; just stuff I’ve heard denialists spout. Crank.net used to have updates on web sites denying q.m. and relativity. Fun reading!
The vat of liquid idea is too complicated; we’ve all watched how liquid flows out of a container, and it isn’t linear. You need a one-dimensional metaphor, like a fuse burning down, or the same little ball somewhere in a linear pipe of liquid.
In any case, you still have a “mechanism” that’s fairly complicated. The fuse has to “burn,” somehow. The pipe has to “flow” somehow. You can’t just say, “Only one datum is involved: the length of the fuse.” The fuse…and the pipe…have to have a real meaningful data-intensive structure. Any “mechanistic” model of Uranium decay has to have an inner working that contains a huge amount of information.
A random model avoids this problem, but, of course, many people have philosophical objections to it, refusing to believe that things can “just happen.” The ghost of Newtonian determinism is unbanished.
My thoughts exactly.
Oh, and everyone in this thread owes Carl Pham and Half Man money.

Oh, and everyone in this thread owes Carl Pham and Half Man money.
Seconded. It’s superb how much the experts are willing to contribute on here.

The vat of liquid idea is too complicated; we’ve all watched how liquid flows out of a container, and it isn’t linear. You need a one-dimensional metaphor, like a fuse burning down, or the same little ball somewhere in a linear pipe of liquid.
In any case, you still have a “mechanism” that’s fairly complicated. The fuse has to “burn,” somehow. The pipe has to “flow” somehow. You can’t just say, “Only one datum is involved: the length of the fuse.” The fuse…and the pipe…have to have a real meaningful data-intensive structure. Any “mechanistic” model of Uranium decay has to have an inner working that contains a huge amount of information.
But the mechanism—the way the water flows through the pipe—is the same as in vanilla QM. All that the hidden variable theory adds is simply the ball, the particle positions. Basically, the ordinary quantum evolution provides a set of trajectories—the worlds of many-worlds theory; Bohmian mechanics adds a dot to one of these worlds that says ‘this is the real one’.

I was watching a PBS documentary a few years ago (NOVA I think). They made the claim that every single experiment conducted to test QM resulted in affirming it, even the crazy loony-tunes stuff. I was certainly impressed …
Now where’s my hover-car?
Can you ever really test QM? Wouldn’t Schrodinger say simply by testing QM you’ve changed the result? I mean, what kind of test could you devise, right? (I tested QM with a cat and wound up with a shoebox full of kittens, but only half were alive. At least, I think half were alive. I was never really certain. )

Can you ever really test QM? Wouldn’t Schrodinger say simply by testing QM you’ve changed the result? I mean, what kind of test could you devise, right? . . .
Measuring both the location and the momentum of a particle simultaneously, to better accuracy than the uncertainty principle allows, would demolish QM as it is currently understood.
The experiment is easy to describe…just, so far, never accomplished.
“Truth” in science is often just “Not getting demolished yet.”

Can you ever really test QM? Wouldn’t Schrodinger say simply by testing QM you’ve changed the result? I mean, what kind of test could you devise, right? (I tested QM with a cat and wound up with a shoebox full of kittens, but only half were alive. At least, I think half were alive. I was never really certain.)
The link below is an excellent and accessible account of the Bell’s Theorem experiments that shows how the results confirm QM, and makes clear why they are so utterly weird. The hypothetical experiment that he describes is isomorphic to the actual experiments and easier to follow.

Can you ever really test QM? Wouldn’t Schrodinger say simply by testing QM you’ve changed the result? I mean, what kind of test could you devise, right? (I tested QM with a cat and wound up with a shoebox full of kittens, but only half were alive. At least, I think half were alive. I was never really certain.)
There are countless very detailed quantitative predictions of quantum mechanics that can be checked in experiments (and many have been, which is why one often hears that quantum theory is the best-confirmed theory in existence).
While QM often doesn’t predict a specific value for a given observable, it makes exact predictions for the probability that the measurement yields a certain value, which can be tested by repeating the same experiment a large enough number of times (which you also have to do in classical mechanics, due to unavoidable experimental uncertainties).
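Just to illustrate that last point (this obviously doesn’t test anything by itself, it only shows how a probabilistic prediction turns into a sharp statement about frequencies), here is a hypothetical repeated measurement in Python:
[code]
import numpy as np

rng = np.random.default_rng(1)

# A qubit prepared as cos(t/2)|0> + sin(t/2)|1>; the Born rule predicts
# Pr(outcome 0) = cos(t/2)^2 for a measurement in the |0>, |1> basis.
t = 1.2                      # some arbitrary preparation angle
p0 = np.cos(t / 2) ** 2      # the quantum prediction

# "Repeat the experiment" many times and compare the observed frequency:
N = 1_000_000
outcomes = rng.random(N) < p0
print("predicted:", p0)               # about 0.68
print("observed: ", outcomes.mean())  # converges to the prediction for large N
[/code]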
In addition to that, many quantities can be calculated exactly using a quantum mechanical treatment—an early success was the spectrum of the hydrogen atom, for instance.
So far, every experiment has been in perfect (often stunning) agreement with the predictions of QM; of course, that doesn’t mean it is necessarily the final word—finding systematic disagreements would entail the need to look for a new theory. That theory, too, would have to make certain new, quantitative predictions: otherwise, it simply would be useless, scientifically.
In addition to that, many quantities can be calculated exactly using a quantum mechanical treatment—an early success was the spectrum of the hydrogen atom, for instance.
Which is actually just a special case of “do the experiment many times to check the probabilities”, where the experiment is set up to really easily do a very large number of runs quickly. Any individual photon emitted by a hydrogen atom can be at any energy, but the energies very close to the lines are much more likely than those not near a line. So you emit a photon, and measure its energy, and then emit another one, and measure it, and so on, until you’ve got enough to see the whole spectrum.
That explains why grad students canvass the neighborhoods at night with catnip and nets every so often …
I am both opposed and unopposed to QM.

How do you get that the uncertainty principle violates causality? Would you also claim that it violates causality in the classical, macroscopic systems where it’s observed?
Well, the uncertainty principle makes Laplacian determinism (“An intellect which at a certain moment would know all forces that set nature in motion, and all positions of all items of which nature is composed, if this intellect were also vast enough to submit these data to analysis, it would embrace in a single formula the movements of the greatest bodies of the universe and those of the tiniest atom.”) impossible, and I’ll bet that a lot of scientists in the early days of quantum theory conflated that idea with the idea of causality.

The link below is an excellent and accessible account of the Bell’s Theorem experiments that shows how the results confirm QM, and makes clear why they are so utterly weird. The hypothetical experiment that he describes is isomorphic to the actual experiments and easier to follow.
I haven’t read this in full as of yet, but it starts out boldly claiming that ‘locality is dead’ thanks to quantum mechanics, which is simply not true. Rather, any proposed theory that augments quantum mechanics with additional parameters needs to be nonlocal; but we don’t know if any such theory is true. Quantum mechanics itself is perfectly local.

Which is actually just a special case of “do the experiment many times to check the probabilities”, where the experiment is set up to really easily do a very large number of runs quickly. Any individual photon emitted by a hydrogen atom can be at any energy, but the energies very close to the lines are much more likely than those not near a line. So you emit a photon, and measure its energy, and then emit another one, and measure it, and so on, until you’ve got enough to see the whole spectrum.
Yes, you’re right. But the line widths are typically so small (something like 10[sup]-6[/sup] of the mean energy) that it needs fairly high precision experiments to even notice them, so for lots of practical purposes, what you’ll see is just the value QM predicts.

I am both opposed and unopposed to QM.
Not me. I’m firmly ambivalent.

in the entire history of empirical science, this has never happened. All advances in theoretical understanding have been driven by puzzling experiments. We do not think of the revolutionary theories first, then discover the experiments that provoke them.
What about Newton’s discovery of gravity? Didn’t he think of the revolutionary theory first?
And only a century or two later other people did the experiments to prove it?
I thought that Newton simply proclaimed his theory: that the force which determines the trajectory of the planets in orbit was the same as the force which determines the trajectory of a cannonball. To support his theory, he showed the math; but did he use experimental data in any way?