Questions about the Uncertainty Principle

It’s a matter of how you choose to interpret QM as well. In Bohmian mechanics, particles always have exact positions (and exact momenta, for that matter).

It’s important to point out, though, that the UP is not a limit on our ability to measure; it is a limit on our ability to predict the outcomes of measurements, and that is true however you choose to interpret QM.

:: off to cram on Bohm ::

I’d love to hear of an experiment that directly proves a particle cannot have a definite position and momentum simultaneously. Because I’ve never seen one, and I don’t believe any is possible.* So far as I know, we accept the foundations of QM (including [x,p] != 0) simply because we have no other theory that explains the world as well as it does. So far as I know, we have no direct proof of its axioms (which is why we call them that).

  • No experiments deriving from Bell’s Inequality, e.g. quantum teleportation experiments, count, because they all rely on the assumption that the particle is described by a probability amplitude, not a probability. Which is to say they assume the existence of a quantum phase right out of the box. This entirely begs the question.

Well, you won’t get one: in Bohmian mechanics, particles do have both a definite position and momentum, at the cost of several other notions we would ordinarily expect from a ‘reasonable’ theory (most notably locality, since arbitrarily far-separated systems instantaneously influence one another, but also the independence of measurement outcomes from which other measurements are performed, and so on).

This is, of course, all the reason we ever have, and ever can have, for believing in any scientific theory.

This isn’t right. Bell inequalities hold for any classical theory, that is, specifically, for any theory in which 1) measurement outcomes are uniquely predetermined, i.e. simultaneous values for all measurable quantities exist, and 2) all influences are local, i.e. there is no superluminal signalling. Those inequalities are experimentally violated by quantum mechanics, meaning that we have to give up one of these assumptions. There are similar no-go theorems, such as the aforementioned Kochen-Specker one, which establishes the (stronger) result that no theory in which measurement outcomes are predetermined and do not depend on what other measurements are simultaneously performed can reproduce the predictions of quantum mechanics.

In short, these theorems tell you that ‘if you observe such-and-such a value in an experiment, a classical theory can’t explain it’; they’re completely free of assumptions about quantum theory. You can simply do the measurement and observe the violation of the classical bound.

I might be way off track here, but I have always resolved the uncertainty principle in my brain as a consequence of wave-particle duality. What I mean is this:

Objects at this scale cannot be accurately described as just particles. They behave like waves as well. If an electron was just a dot whizzing around it would be a (relatively) simple matter to determine its mass, position, velocity, momentum and be done. Since however it is a wave, it exists over a range of positions simultaneously. Hence the difficulty in determining both position and momentum simultaneously.
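For what it’s worth, that trade-off can be seen numerically. Here’s a small sketch of my own (not from any poster; units with ħ = 1): build a Gaussian wavepacket, read off its spread in position, Fourier-transform it to get its spread in momentum, and check that the product sits at the minimum value of 1/2 the uncertainty principle allows.

```python
import numpy as np

# A numerical sketch (my own illustration, units with hbar = 1): a Gaussian
# wavepacket spread over a range of positions has a corresponding spread of
# momenta, and the two spreads trade off, with sigma_x * sigma_p >= 1/2.

N = 2**14
L = 80.0
x = np.linspace(-L / 2, L / 2, N, endpoint=False)
dx = x[1] - x[0]

sigma = 1.3  # arbitrary position-space width of the packet
psi = np.exp(-x**2 / (4 * sigma**2)).astype(complex)
psi /= np.sqrt(np.sum(np.abs(psi)**2) * dx)  # normalize

# Spread in position, straight from the probability density |psi(x)|^2
prob_x = np.abs(psi)**2 * dx
mean_x = np.sum(x * prob_x)
sigma_x = np.sqrt(np.sum((x - mean_x)**2 * prob_x))

# Spread in momentum, from the Fourier transform of psi
k = 2 * np.pi * np.fft.fftfreq(N, d=dx)
prob_k = np.abs(np.fft.fft(psi))**2
prob_k /= prob_k.sum()
mean_k = np.sum(k * prob_k)
sigma_k = np.sqrt(np.sum((k - mean_k)**2 * prob_k))

print(f"sigma_x * sigma_p = {sigma_x * sigma_k:.4f} (minimum: 0.5)")
```

Squeeze the packet tighter in x (smaller sigma) and it comes out correspondingly wider in momentum; the product stays pinned at 1/2 for a Gaussian, and lies above it for any other shape.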

(And no measurement/observation paradoxes involved. I always regarded that as quite a separate issue.)

Tell me if I am incorrect here.

How do we know that? I realize that for all intents and purposes, it’s the same thing, but if we can’t measure it, how can we say that it doesn’t have an exact value for the conjugate?

Good and confused, are you, eh?

Good. That shows you’ve been paying attention. :slight_smile:

The problem with that point of view is that the “old quantum” picture of wave-particle duality is not fundamental in quantum mechanics. In particular, the idea of the particle as a physical wave doesn’t really exist in QM. The point of view most faithful to wave-particle duality that can fully reproduce QM is probably the pilot wave of Bohmian mechanics, which guides the trajectories of particles.

Wave-particle duality makes for a good heuristic justification of the uncertainty principle, but the principle actually comes from the very abstract space the wavefunction occupies and the relationship between that space and observables.

Well, let’s assume that there is at all times a definite value for every observable quantity. In particular, let’s assume that there are four different quantities, A, B, C, and D, each of which can take the value +1 or -1, and each of which is always definite. You could imagine, for instance, A being position: the particle being in the left half-space means a measurement outcome of -1, the particle being in the right half-space means a measurement outcome of +1. Equivalently, you could imagine one of these quantities to be momentum, or anything else. Most commonly, however, A-D are thought of as representing a particle’s spin along a particular direction, which can be either +1 or -1.

Now, consider the following quantity:
X = A(B - D) + C(B + D)

It is not hard to see that if A-D have fixed values, then X can be at most 2; it suffices to check these four cases:
[ul]
[li]B = D = +1: the first term vanishes, and the second one is equal to 2 if C = +1[/li]
[li]B = +1, D = -1: the second term vanishes, and the first one is equal to 2 if A = +1[/li]
[li]B = -1, D = +1: the second term vanishes, and the first one is equal to 2 if A = -1[/li]
[li]B = D = -1: the first term vanishes, and the second one is equal to 2 if C = -1[/li]
[/ul]
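If you don’t trust the case analysis, it can also be checked by brute force; here is a tiny sketch of my own:

```python
from itertools import product

# Enumerate every assignment of +1/-1 to A, B, C, D and confirm that
# X = A(B - D) + C(B + D) never exceeds 2 (nor drops below -2).
values = [A * (B - D) + C * (B + D)
          for A, B, C, D in product([+1, -1], repeat=4)]
print(max(values), min(values))  # -> 2 -2
```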

Slightly rearranged, we get the following:
AB + CB + CD - AD ≤ 2

Note that this inequality must hold for every theory in which all of A-D have a definite value. In particular, since it holds every time the measurement is performed, it also holds on average; that is, if <AB> denotes ‘measure AB many times and then average over the results’, the following holds as well:

<AB> + <CB> + <CD> - <AD> ≤ 2

Now, we would like to test whether this inequality holds also in the case of quantum mechanics. If we could show that it doesn’t, then we would also, unambiguously, have shown that A-D can’t simultaneously have definite values. But there are some obstacles involved.

First of all, we need to make precise what something like <AB> is supposed to mean in QM. The issue here is that in general, not all quantities can be measured simultaneously to arbitrary exactness: this is, of course, the famous uncertainty principle. Basically, we’ve got two choices: either we can make a sequential measurement of quantities that can’t be simultaneously measured, or we can make a simultaneous measurement of quantities that can be.

The first choice leads to a so-called Leggett-Garg inequality. Note that we have made an additional assumption here: namely, that the value of a measured quantity does not change if we first measure a different one. If this did not hold, then our above derivation no longer applies: in particular, there would be no reason for the D after the measurement of C to have the same value as after a measurement of A. And with that, the whole thing collapses. Thus, we can only rule out the joint assumption of ‘definite values’ and ‘noninvasive measurements’.

The second choice means that whenever we measure something like <AB>, both A and B must be quantities that can be measured simultaneously—i.e. for which there exists no uncertainty principle. Then, however, we again must make an additional assumption: namely, that measuring, say, A simultaneously with B yields the same value as measuring A simultaneously with D, i.e. that the value of A is independent of the context in which it is measured. So once again, we can only exclude the joint assumptions of ‘definite values’ and ‘context-independence’, or ‘noncontextuality’. This is the content of the Kochen-Specker theorem.
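As an aside, the standard textbook illustration of this obstruction is the Mermin-Peres “magic square” (my own sketch, not something from the posts above): nine two-qubit observables whose row and column products are fixed by quantum mechanics in a way that no fixed, context-independent ±1 values can reproduce.

```python
import numpy as np
from itertools import product

# The Mermin-Peres magic square: nine two-qubit observables; the three in
# each row and each column commute, so each triple can be measured together.
I2 = np.eye(2, dtype=complex)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)

square = [
    [np.kron(I2, Z), np.kron(Z, I2), np.kron(Z, Z)],
    [np.kron(X, I2), np.kron(I2, X), np.kron(X, X)],
    [np.kron(X, Z), np.kron(Z, X), np.kron(Y, Y)],
]

# Quantum mechanics fixes the product of each commuting triple: +identity
# for every row and for the first two columns, but -identity for the third.
I4 = np.eye(4)
for r in range(3):
    assert np.allclose(square[r][0] @ square[r][1] @ square[r][2], I4)
for c in range(3):
    col = square[0][c] @ square[1][c] @ square[2][c]
    assert np.allclose(col, I4 if c < 2 else -I4)

# A noncontextual assignment would give each observable a fixed +1 or -1
# value reproducing those products. Brute force over all 2^9 assignments
# shows that no such assignment exists.
good = 0
for v in product([+1, -1], repeat=9):
    rows_ok = all(v[3 * r] * v[3 * r + 1] * v[3 * r + 2] == +1
                  for r in range(3))
    cols_ok = all(v[c] * v[c + 3] * v[c + 6] == (+1 if c < 2 else -1)
                  for c in range(3))
    good += rows_ok and cols_ok
print(good)  # -> 0
```

The parity argument behind the failure is simple: multiplying all nine values row by row must give +1, but column by column must give -1, and those are products of the same nine numbers.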

The Kochen-Specker theorem is, in a sense, a bit misnamed, because it was discovered by J. S. Bell some years before Kochen and Specker, and is thus sometimes also called the Bell-Kochen-Specker theorem. Bell wasn’t terribly convinced by the assumption of noncontextuality: he saw no reason that the value of a quantity should not depend on what other measurements occur simultaneously. So, he sought to strengthen the result by positing a stricter requirement, which made the theorem less general, but, he hoped, also more convincing. He found what he was looking for in the notion of locality: essentially, he required the measurement of the other quantity to be carried out on an entirely different system—that is, a term like <AB> now means ‘measure A on system 1, and B on system 2, such that no signal could travel from 1 to 2’. In this sense, Bell’s theorem is a more specialized version of the KS one. Here, the two assumptions being tested are ‘definite values’ and ‘locality’.

Now, all three of these have, in fact, been subject to experimental scrutiny—and every time, quantum mechanics has exceeded the bound derived above. (The theoretical maximum, in quantum mechanics, is in fact 2√2, or roughly 2.83.) What does this mean? Well, the issue is somewhat subtle. On the face of it, it means that you either have to give up the idea of definite values, or the conjunction of noninvasive measurements, noncontextuality, and locality, and that’s sort of the minimal implication of these results. However, as this post is getting quite long already, I’ll leave out the more philosophical details about different kinds of nonlocality, the in-principle measurability of noncontextuality, and so on. A question, perhaps: in what sense does a system have ‘definite values’ for its observables, if at any time, measurements carried out arbitrarily far away could instantaneously change those values?
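For the curious, that quantum maximum of 2√2 can be checked directly with a few lines of linear algebra. This is a sketch of my own; the state and measurement angles below are one standard choice that attains the maximum, not anything unique.

```python
import numpy as np

# Two qubits in the Bell state (|00> + |11>)/sqrt(2); each observable is a
# spin measurement in the x-z plane, O(theta) = cos(theta) Z + sin(theta) X.
X = np.array([[0, 1], [1, 0]], dtype=float)
Z = np.array([[1, 0], [0, -1]], dtype=float)

def O(theta):
    return np.cos(theta) * Z + np.sin(theta) * X

bell = np.array([1.0, 0.0, 0.0, 1.0]) / np.sqrt(2)

def E(obs1, obs2):
    # Expectation value <psi| obs1 (x) obs2 |psi>
    return bell @ np.kron(obs1, obs2) @ bell

# One choice of angles that attains the maximum (A and C are measured on
# particle 1; B and D on particle 2):
A, C = O(0), O(np.pi / 2)
B, D = O(np.pi / 4), O(3 * np.pi / 4)

S = E(A, B) + E(C, B) + E(C, D) - E(A, D)
print(S, 2 * np.sqrt(2))  # both ~2.8284
```

Each individual correlation comes out to ±1/√2, so the combination <AB> + <CB> + <CD> - <AD> adds up to 4/√2 = 2√2, comfortably above the classical bound of 2.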

There’s also the Stern-Gerlach experiment, which doesn’t immediately refute the idea of hidden variables, but at least shows that a naive treatment won’t work. There are spin operators S[SUB]z[/SUB] and S[SUB]x[/SUB], which measure spin along the z- and x-axes, respectively. (Exactly what spin is is reasonably complicated—there really is no classical analogue—but mostly irrelevant here. The only thing we really need is that S[SUB]z[/SUB] and S[SUB]x[/SUB] do not commute; in fact, their commutator is proportional to S[SUB]y[/SUB].)

The possible values of spin for an electron are 1/2 or -1/2; those two discrete values are the only possibilities. With an experimental setup, you can detect spin of incoming electrons and split them into two beams, one returning +1/2 and one returning -1/2 for S[SUB]z[/SUB]. If you take the +1/2 beam and run it through another such setup, nothing interesting happens: All the electrons have spin +1/2, and so there is no -1/2 beam from the second detector.

Chain an S[SUB]z[/SUB] and an S[SUB]x[/SUB] detector together, so that you split an incoming beam into an S[SUB]z[/SUB] +1/2 beam and an S[SUB]z[/SUB] -1/2 beam, then pass the +1/2 beam into an S[SUB]x[/SUB] detector. You’ll see two beams, this time corresponding to the +1/2 and -1/2 values for S[SUB]x[/SUB]. Now take that last +1/2 beam and pass it through another S[SUB]z[/SUB] detector. Even though all the electrons in that beam had +1/2 for S[SUB]z[/SUB] when they went through the first detector, you’ll now see two separate +1/2 and -1/2 beams; in fact, there will be equal numbers of electrons in each.

If the electrons coming into the first detector were secretly +1/2 or -1/2 in S[SUB]z[/SUB], rather than in a superposition of the two, then that value was somehow reset along the way. The usual resolution is instead that the middle detector forced the electrons into the S[SUB]x[/SUB] +1/2 eigenstate, which happens to be an even superposition of S[SUB]z[/SUB] +1/2 and S[SUB]z[/SUB] -1/2; and that’s exactly what you see out of the last detector.
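The bookkeeping in that chain of detectors is easy to verify with the standard spin-1/2 matrices (a sketch of my own, with ħ = 1; here “selecting a beam” just means projecting the state onto the relevant eigenvector):

```python
import numpy as np

# Spin-1/2 operators along z and x, in units with hbar = 1
Sz = 0.5 * np.array([[1, 0], [0, -1]], dtype=complex)
Sx = 0.5 * np.array([[0, 1], [1, 0]], dtype=complex)

def select_beam(state, S, eigenvalue):
    """Keep only the beam the detector deflects toward `eigenvalue`:
    return the corresponding eigenstate and the fraction of electrons
    that end up in that beam."""
    vals, vecs = np.linalg.eigh(S)
    v = vecs[:, np.argmin(np.abs(vals - eigenvalue))]
    fraction = np.abs(np.vdot(v, state))**2
    return v, fraction

state = np.array([1, 0], dtype=complex)  # the Sz = +1/2 beam

# A second Sz detector right away: nothing interesting, everything is +1/2
_, p = select_beam(state, Sz, +0.5)
print(round(p, 6))            # -> 1.0

# Route the Sz = +1/2 beam through an Sx detector and keep Sx = +1/2...
state, p = select_beam(state, Sx, +0.5)
print(round(p, 6))            # -> 0.5

# ...then through a final Sz detector: the beam splits 50/50 again
_, p_up = select_beam(state, Sz, +0.5)
_, p_down = select_beam(state, Sz, -0.5)
print(round(p_up, 6), round(p_down, 6))   # -> 0.5 0.5
```

The middle S[SUB]x[/SUB] detector is what does the damage: its +1/2 eigenstate is an equal superposition of the two S[SUB]z[/SUB] eigenstates, so the final detector splits the beam evenly.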