Is it really true that physicists can't agree on fundamental questions of quantum mechanics?

Reiteration is not an acceptable philosophical (or scientific) mode, in Western philosophy at least.

So you said.:rolleyes:

Did you disagree with the point I made, or only how I made it?

Well, it’s kind of moot to debate such historical contingencies; I’m not sure it’s all that likely people would have moved in the ‘right’ direction without SR’s guidance at all, but as I said, it’s possible in principle. Another thing I think we’d be missing in your scenario is (most of) black hole theory, though, which is responsible for what most would hold to be the major theoretical advances of the past couple of decades.

Those attempts I know of work mostly by doing violence of one form or another to the ‘spirit’ of Bohm theory—introducing explicitly stochastic dynamics, for instance—and I think they are clearly guided only by a desire to arrive at quantum field theory. But again, I’m willing to concede that it’s possible in principle; as with GR, though, it would necessitate to some extent going against the grain of one’s interpretational principles, rather than being led by them.

I’m admittedly out of my depth here so be gentle.

I have this probably half-baked notion that one’s interpretation of QM does make a difference inasmuch as it relates to choosing between locality and counterfactual definiteness.

To be clear, I understand the latter to mean that reality has an objective existence apart from our observation of it. So that when you take a sample of electron positions around a hydrogen nucleus, you can assume that the rest of the time the electron is in one of those observed positions, and not partying somewhere in the Crab Nebula.

Since I don’t think anyone disputes the basic scientific validity of QM, and since Bell’s theorem states that you have to choose between locality and CFD, there are only certain interpretations of QM that allow it to peacefully coexist with relativity.

Is that correct or am I completely stoned?

Basically, you’re right: Bell’s theorem forbids any theory that both obeys locality (i.e. in which events only depend on other events within their past light cone) and in which all physically observable quantities have definite values at all times from reproducing the predictions of quantum mechanics, so you have to give up at least one of the two (and in practice, in most interpretations, both would at least have to be modified).
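
To put some numbers on this, here’s a minimal sketch (Python/numpy, with an arbitrary but standard choice of measurement angles of my own) showing the quantum prediction for the CHSH combination of correlations: it reaches 2√2, while any local theory with pre-set definite values is bounded by 2.

[code]
import numpy as np

X = np.array([[0, 1], [1, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)
phi_plus = np.array([1, 0, 0, 1], dtype=complex) / np.sqrt(2)  # entangled state (|00> + |11>)/sqrt(2)

def spin(theta):
    # spin observable along an angle theta in the x-z plane
    return np.cos(theta) * Z + np.sin(theta) * X

def E(a, b):
    # correlation <A(a) B(b)> for joint spin measurements on the two particles
    return (phi_plus.conj() @ np.kron(spin(a), spin(b)) @ phi_plus).real

# CHSH combination with a standard choice of angles
a1, a2 = 0.0, np.pi / 2
b1, b2 = np.pi / 4, -np.pi / 4
S = E(a1, b1) + E(a1, b2) + E(a2, b1) - E(a2, b2)
print(S)  # ~2.828 = 2*sqrt(2); any local theory with definite values is bounded by 2
[/code]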

An example that gives up locality, but in which at least one special observable (position) is definite at all times, is Bohmian mechanics; the various modal interpretations are often claimed to preserve locality while giving up (or modifying) the notion of reality, i.e. of what it means for physical quantities to have a definite value. The many- (or splitting-)worlds interpretation is also local, but physical quantities are only definite in a given branch, taking on all possible values across the totality of branches (many worlds is also part of the reason I am hesitant to talk about counterfactual definiteness: it’s often identified with realism—the fact that physical quantities have definite values—but many worlds obviously violates counterfactual definiteness while keeping realism, at least for a single branch).

Problems with relativity only arise if your interpretation incorporates nonlocality by means of an explicit ‘action at a distance’, i.e. by having what you do over here instantaneously influence what happens over there. This is basically the reason relativity is a problem for Bohm theory, where you do indeed have nonlocal action, mediated by the ‘quantum potential’. In such a case, one has to be careful to implement the so-called non-signalling principle: roughly, what you do over here cannot change the probability distributions of what happens over there, as otherwise, you could transmit information superluminally.
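
To illustrate the non-signalling principle, here’s a little toy sketch of my own (Python/numpy): whatever is done to the far half of an entangled pair—an unread measurement, say—the reduced state on the near side, and hence every probability distribution you could observe there, stays exactly the same.

[code]
import numpy as np

# Bell state (|00> + |11>)/sqrt(2) as a density matrix
phi = np.array([1, 0, 0, 1], dtype=complex) / np.sqrt(2)
rho = np.outer(phi, phi.conj())

def local_state(rho):
    # partial trace over the second (distant) qubit
    return rho.reshape(2, 2, 2, 2).trace(axis1=1, axis2=3)

# Unread projective measurement performed on the distant qubit only
P0 = np.diag([1, 0]).astype(complex)
P1 = np.diag([0, 1]).astype(complex)
rho_after = sum(np.kron(np.eye(2), P) @ rho @ np.kron(np.eye(2), P) for P in (P0, P1))

print(local_state(rho))        # I/2
print(local_state(rho_after))  # still I/2: the statistics over here are unchanged
[/code]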

There’s another, more subtle point regarding realism and counterfactual definiteness: another famous no-go theorem, by Simon Kochen and Ernst Specker, prohibits theories whose observable quantities possess values independent of other, jointly observed quantities from reproducing the predictions of quantum mechanics.

To understand this, take a set of observables which can be jointly observed perfectly (i.e. unlike position and momentum, which obey an uncertainty principle). The interpretation of this is that the measurements don’t disturb one another, and hence are compatible: if A and B are compatible measurements, observing A does not change the value of B, and vice versa (in particular, in any sequence of measurements composed only of those two, ideally all further measurements of each will reproduce the first value observed). So one would be tempted to speak of a definite value for A, independently of B.

But quantum mechanics forbids such speech. Take another observable that is compatible with A, say C; then, the statement that measurement of A would yield the same value whether it is measured jointly with C or with B can be shown to be in contradiction with quantum mechanics—this is the Kochen-Specker theorem. That despite the fact that we know that measuring C or B does not change the value of A!

Since C or B forms a context for the observation of A, one often talks about the contextuality of quantum theory in this respect. It’s a property of QM that is much unlike anything we know classically: think of some object for which you would find a different shape, if you were to observe it jointly with its color, than if you were to observe it together with its weight!

In fact, this theorem can in some ways be seen as a generalization of Bell’s: locality is really a red herring; it’s the fact that your measurements are undertaken in different contexts that underlies the violation of Bell’s inequality. In particular, you don’t actually need any entanglement or other quantum weirdness to manifest the non-classicality of quantum mechanics: any quantum state can be shown to exhibit contextuality.

Mostly I really hate Wikipedia on this subject, but every once in a while . . .

So I’m probably mistaken, but it seems more like it’s drawing the noose around the hidden variables argument more than anything else.

As they explain prior to that, Einstein et al. had a) assumed observables to be commutative, which turns out to be a contradiction (I'm mostly parroting here), and b) required reality to be non-contextual.

OK, I see you were focusing on ‘b’, and that IS a bit slippery, but if non-CFD is saying you can't turn your back on an electron and just assume it's where you think it is, doesn't that sort of subsume non-contextuality (think I just made that word up)?

But getting back to interpretations. I’m not really clear how you finesse the spooky action at a distance thing. But never mind about that, I can look that up.

We also have spooky action across time. I wasn't sure I'd be able to dig this up, but Evernote came to the rescue. Time-like entanglement. I never really understood this, but my guess is that somehow the experiment (if it's ever done) would entangle a circuit (I guess a Josephson junction or something) with the zero-point field. You would then read the circuit at a precise time in the future and the circuit would not exist during the time in between.

Or something like that.

Well, there are several different, but interconnected notions at play here:
[ol]
[li]realism: Despite what quantum mechanics says or seems to say, physical quantities have a definite value at all times. These values are provided by hidden variables.[/li]
[li]locality: The state of a system does not depend on the state of space-like separated systems, or every event only depends on causes in its past light cone. Together, locality and realism comprise Einstein’s desideratum of local realism, motivated by a) the existence of a maximum finite speed c at which causal influences propagate, and b) the probabilistic nature of quantum mechanical predictions, which hints at more fundamental degrees of freedom (ordinarily, probability arises in our description of nature because of our ignorance of these fundamental degrees of freedom).[/li]
[li]noncontextuality: As I explained above, the value of a physical quantity is taken to be independent of other (compatible) observations made simultaneously (or indeed sequentially); that is, it doesn’t matter if I measure A and B or A and C, if the measurement of neither B nor C disturbs the value of A.[/li]
[li]counterfactual definiteness: It is permissible to make counterfactual statements of the form ‘had experiment x been performed, the value y would have been observed’.[/li]
[/ol]
Bell’s theorem asserts the incompatibility of (1) and (2); the Kochen-Specker theorem states that no theory fulfilling (1) and (3) can reproduce QM. In fact, Bell thought that (3) was too strong a requirement, and thus settled for locality, in which the non-disturbance of measurements is guaranteed by spatial distance. (4) is a bit more difficult: even in the case that (1) is true, it might not hold, as it could be that the universe works according to superdeterministic rules, such that it is never possible to perform any measurement other than the one that actually was performed. So (1) would have to be augmented at minimum with a ‘free will’ assumption to guarantee (4); but obviously, if (1) doesn’t hold, neither does (4). To the extent that no-go theorems chip away at (1), they thus also lessen the likelihood of (4); in fact, a common tenet in instrumentalist approaches to quantum theory is Asher Peres’ dictum that ‘unperformed experiments have no results’.

A lot of this goes back to Einstein, Podolsky, and Rosen, whose criterion for an ‘element of reality’ is:

[QUOTE=EPR]
If, without in any way disturbing a system, we can predict with certainty (i.e., with probability equal to unity) the value of a physical quantity, then there exists an element of physical reality corresponding to this physical quantity.

[/QUOTE]

This is clearly a noncontextual notion of reality, because, as I mentioned, measurements of B and C do not disturb A, but nevertheless, trying to assign a value to A without taking into account the measurement context runs into contradictions.

In theory, it’s not different from ‘ordinary’ entanglement if you take special relativity to heart; both space and time are just parts of spacetime on more-or-less equal footing.

More interesting, I think, are so-called Leggett-Garg inequalities: if you have a system that can be in one of two states, and a measurement that decides between these, the assumption that the system always is in either of these states, plus that the measurements are non-invasive (i.e. don’t change the state), again yields a contradiction with quantum mechanical predictions, making this another classic no-go result.
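
For concreteness, here’s a minimal sketch of my own (Python with numpy/scipy, taking a precessing spin-1/2 as the two-state system) of how the quantum prediction for the standard Leggett-Garg combination of two-time correlations reaches 1.5, above the macrorealist bound of 1.

[code]
import numpy as np
from scipy.linalg import expm

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)

omega = 1.0
H = 0.5 * omega * sx          # spin precessing about the x-axis
tau = np.pi / (3 * omega)     # measurement spacing that maximizes the violation
rho = np.eye(2) / 2           # take the state maximally mixed for simplicity

def Q(t):
    # the dichotomic observable (spin along z) in the Heisenberg picture
    U = expm(-1j * H * t)
    return U.conj().T @ sz @ U

def C(t1, t2):
    # symmetrized two-time correlation function
    return 0.5 * np.trace(rho @ (Q(t1) @ Q(t2) + Q(t2) @ Q(t1))).real

K = C(0, tau) + C(tau, 2 * tau) - C(0, 2 * tau)
print(K)  # ~1.5, above the macrorealist bound of 1
[/code]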

This thread has moved far away from where it was when I last saw it, but I owe a response to this older post:

I agree with everything you say here, including that it is “effectively” ruled out. My only seemingly minor point is that there is a fundamental difference between “rule out” and “arbitrarily strongly effectively rule out”. If Russell presents to me his teapot, I will be the first to laugh him out of the room. But I will not say that the teapot is not there, because I cannot logically draw that conclusion.

(As an aside: the teapot hypothesis is, of course, observationally distinct from the non-teapot hypothesis, but one approximates that away because otherwise the thought experiment isn’t interesting. This is also true of Copenhagen / MW, since they are only observationally equivalent when one makes the usual experimental approximation that the measurement apparatus is classical. As you point out, this is only an approximation, and as soon as you throw that approximation out, you no longer have a case of observational equivalence to discuss.)

My point is just a distant cousin of Hume’s Is-Ought problem. One can have arbitrarily strong and valid reasons to ignore and ridicule any arbitrary model in favor of another equivalent one, but one can never actually get to 100% ruling it out if they are truly observationally identical.

Theoretical prejudice is a central and deep-rooted part of the theoretical enterprise. It is the heart and soul of effective progress. Someone who can look at a particular approach and correctly predict “You know, I don’t think that’s going to get you very far” will bring much more to science than someone who can’t. However, because this is all one can do theoretically to move forward, people sometimes forget that it can still be wrong and does not actually prove anything about how the universe actually works. If Nature decides to put a teapot out there, who am I to argue?

My point is that regardless of the fact that we can never rule out Russell’s teapot with exactly 100% confidence, we nonetheless find ourselves in the position that we must rule it out in practice until countervailing evidence presents itself. This is because there are 10^billion alternative theories that must be considered if we are to consider all theories that are not exactly 100% ruled out, and if we do so we will never make any progress at all. I think it is only necessary to point this obvious fact out because of the analogy to some interpretations of quantum mechanics. Some are better than others, despite the fact that most of them cannot be ruled out with exactly 100% confidence. In practice, it is entirely reasonable to “rule some out,” with the understanding that “ruled out” could change to “ruled in” if your Bayesian prior changes in light of some new idea.

This is my first encounter with the Kochen-Specker theorem. Could someone give me a concrete example with, say, energy, total angular momentum, and z component of angular momentum? I’m having a hard time visualizing what it’s saying.

The conceptually simplest proof for the KS theorem is given by the Peres-Mermin square.

Basically, take some four-level quantum system—say, two spin-1/2 particles—with observables constructed out of the Pauli matrices X, Y, Z (corresponding to the spin along the x-, y-, and z-axes) and the identity I. Then construct the following array of observables, where for instance ‘XZ’ means ‘jointly measure X on the first, and Z on the second particle’:


_______       __________
|A|B|C|       |ZI|IZ|ZZ|
|a|b|c|   =   |IX|XI|XX|
|α|β|γ|       |ZX|XZ|YY|


Now, the observables in each row, as well as in each column, commute, i.e. they can be measured together without disturbing one another. That means that every observable is part of two different contexts; for instance, A can be measured together with B and C, and also together with a and α.

Then try to find a noncontextual attribution of values to all nine. Since all of these are dichotomic observables, this amounts to the task of distributing the values +1 and -1 among the observables such that the result agrees with the quantum prediction. This, however, can’t be done.

To see this, note that the product of the observables in each row equals I, the (four-dimensional) identity, which, if it is measured, returns determinately the value 1. Thus, in order to obey this constraint, there must be an even number of -1’s distributed around the square, either none or two per row in fact.

Now, the first two columns also yield I if the observables are multiplied; however, the final column yields Ccγ = -I. Thus, in order to satisfy these constraints, an odd number of -1’s must be distributed around the square—which is in contradiction with the previous requirement of distributing an even number.

This means that you cannot associate a value to all of the observables simultaneously, without paying attention to other compatible measurements performed jointly, even though those measurements don’t disturb one another’s value.

Note that we have only talked about observables, not states—the proof works no matter what state the system is in; that is, you need no entanglement to demonstrate the nonclassical nature of a quantum system, and in fact it might even be in the maximally mixed state. That, and the fact that you can derive Bell’s theorem from it, makes it a much more general, though often overlooked, no-go result.
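
If anyone wants to check this for themselves, here’s a rough sketch (Python/numpy, with the same labelling of the square as in my post above): it verifies the operator identities and then brute-forces all 2^9 assignments of ±1 to show that none satisfies the row and column constraints.

[code]
import numpy as np
from itertools import product

# Pauli matrices and the 2x2 identity
I2 = np.eye(2, dtype=complex)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)

def obs(first, second):
    # 'ZX' means: measure Z on the first particle, X on the second
    return np.kron(first, second)

square = [[obs(Z, I2), obs(I2, Z), obs(Z, Z)],
          [obs(I2, X), obs(X, I2), obs(X, X)],
          [obs(Z, X), obs(X, Z), obs(Y, Y)]]

I4 = np.eye(4)

# Every row multiplies to +I; the first two columns multiply to +I, the last to -I
for r in range(3):
    assert np.allclose(square[r][0] @ square[r][1] @ square[r][2], I4)
for c in range(3):
    target = I4 if c < 2 else -I4
    assert np.allclose(square[0][c] @ square[1][c] @ square[2][c], target)

# Now try every noncontextual assignment of +1/-1 to the nine observables
def consistent(v):
    rows_ok = all(v[r][0] * v[r][1] * v[r][2] == 1 for r in range(3))
    cols_ok = all(v[0][c] * v[1][c] * v[2][c] == (1 if c < 2 else -1) for c in range(3))
    return rows_ok and cols_ok

assignments = (np.reshape(bits, (3, 3)) for bits in product([1, -1], repeat=9))
print(any(consistent(v) for v in assignments))  # False: no assignment satisfies all constraints
[/code]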

Maybe I’m just not seeing it, but that looks trivial to me. Trying to find a consistent assignment for all nine at once means, for instance, trying to simultaneously find A and b, which means finding both the Z and X components of angular momentum for the first particle, but of course different components of angular momentum don’t commute.

No, you only ever consider simultaneous measurements within one row or column, in which all observables commute. That all of the observables get involved is merely due to the fact that, while A is in a context only with B and C or with a and α, a for instance is also in a context with b, so that you do indeed end up having to talk about all of them.

Think about it the following way: once you measure A, you have a choice of measuring either B and C or a and α. Assuming that, upon so doing, you merely ‘uncover’ certain pre-set values, you run into the Kochen-Specker contradiction, since the same reasoning must then hold for the case in which you measure first B, and then choose to either measure A and C or b and β, and so on.

Perhaps it’s more clear in a ‘Bell-like’ inequality form (<x> denotes the expectation value):

<ABC> + <abc> + <αβγ> + <Aaα> + <Bbβ> - <Ccγ> <= 4

This must hold for all noncontextual theories. But quantum mechanics predicts a value of 6, since the first four terms are all equal to the identity, while the last one equals its negative. Here, it’s clear that you don’t need to assign simultaneous values to A and b, because they’re never measured together.
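
And here’s a quick numerical check of that (Python/numpy, same labelling as before), evaluating the six expectation values in the maximally mixed state; the sum comes out to 6, above the noncontextual bound of 4.

[code]
import numpy as np

I2 = np.eye(2, dtype=complex)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)
kron = np.kron

# The Peres-Mermin square, same labelling as above (al, be, ga stand for alpha, beta, gamma)
A, B, C = kron(Z, I2), kron(I2, Z), kron(Z, Z)
a, b, c = kron(I2, X), kron(X, I2), kron(X, X)
al, be, ga = kron(Z, X), kron(X, Z), kron(Y, Y)

rho = np.eye(4) / 4  # maximally mixed state; any other state gives the same answer here

def ev(*ops):
    # expectation value of a product of commuting observables
    prod = np.eye(4, dtype=complex)
    for op in ops:
        prod = prod @ op
    return np.trace(rho @ prod).real

S = ev(A, B, C) + ev(a, b, c) + ev(al, be, ga) + ev(A, a, al) + ev(B, b, be) - ev(C, c, ga)
print(S)  # 6.0, above the noncontextual bound of 4
[/code]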

Yep, it wouldn’t necessarily have been as pretty as GR (in fact almost certainly not), but up to a point there is only one sensible way to go to get a Lorentz-invariant gravitational theory that can account for nearly all of our direct observations of GR.

I must admit I don’t know a lot about this, but I know Hrvoje Nikolic has formulated Bohmian interpretations of quantum field theory. Nikolic has elsewhere pointed out the interpretational problems of quantum field theory, one of which is that whilst particles are an empirically observed fact and QFT takes a theory of particles (i.e. quantum mechanics) as its starting point, particles are not necessarily well-defined entities in QFT, as there will not always be an observable corresponding to particle number.

Well, not all states in QFT are eigenstates of the number operator, but then, not all states in QM are eigenstates of the position (or energy, etc.) operator—that doesn’t mean those are not well-defined concepts. But yes, the great schism in the interpretation of QFT is probably between those vying for a particle ontology and those preferring to just start with fields instead (with perhaps the majority coming down on the field side nowadays). But (and I think we’ve discussed this point before) the ‘empirically observed fact’ that particles exist can’t really be a problem for QFT: it certainly predicts those observations. If we take those observations to be proof of the existence of particles, and that’s in conflict with QFT, then I guess that’s really an interpretational problem.