Backward Causation in Quantum Physics

I just read a nearly twenty-year-old book by a philosopher which argues that we can keep local hidden variables in quantum physics so long as we allow for backward causation. The measurement problem also goes away if we allow for backward causation. And, of course, he also argues that we should allow for backward causation–that objections against it don’t work.

I check the author’s website and see he’s still active and still pushing much the same line.

In the book he discusses some conversations he’s had with physicists by mail, and he implies that at least some of them were at least friendly to his ideas. But that is all I can glean about the status of ideas like this in the actual practice of Physics.

My GQ is, are there physicists who buy into the idea that an interpretation of quantum mechanics which allows for backward causation:

  1. …gives us the possibility of local hidden variables and avoids the measurement problem, and
  2. …is useful or plausible or should be taken on, whether because of 1 or for some other reason

?

I should note:

You also get, he suggests, a QM that’s compatible with special relativity (he thinks that any model involving simultaneous collapse, such as the models most QM theorists build around entanglement phenomena, is not compatible with special relativity, since it requires a privileged point of view).

And I think he might have just said backward causation saves locality, not “local hidden variables,” but I’m not sure now. I’d have to reread the relevant section.

I’m not very good at quantum physics, but I have had some college physics that covered it, and I have a decent grasp of the basics…

Accepting “backward causality” is a darn high price to pay, simply to defend “hidden variables,” for which there isn’t any need anyway.

Hidden Variables is (as I understand it) the idea that there are internal mechanisms that “explain” the random nature of quantum events. For instance, the standard model says that a uranium atom will decay randomly. It might happen now, or a year from now. All we can really say is that if you have a million uranium atoms, half of them will decay in a certain definite period of time (the “half-life” of uranium).
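To put a number on that half-life claim, here’s a tiny sketch in plain Python (purely illustrative numbers, not anything from the book):

```python
# Expected exponential decay: after t half-lives, a fraction (1/2)**t of the atoms remain.
# Any individual atom still decays at a random, unpredictable moment.

def remaining(n0, half_lives):
    """Expected number of undecayed atoms after the given number of half-lives."""
    return n0 * 0.5 ** half_lives

n0 = 1_000_000  # start with a million atoms
for t in range(4):
    print(f"after {t} half-lives: ~{remaining(n0, t):,.0f} atoms left")
# after 0 half-lives: ~1,000,000 atoms left
# after 1 half-lives: ~500,000 atoms left
# after 2 half-lives: ~250,000 atoms left
# after 3 half-lives: ~125,000 atoms left
```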

The Hidden Variables idea says that there are little thingamajigs inside the atom that tell it when to decay. A kind of “fuse” that burns down slowly; a little internal “alarm clock” that goes off and makes the decay happen.

No one has ever found any evidence for this. The usual QM theories don’t require it in any way. They just say, “It’s random.”

To accept some form of “time travel” – a signal moving backward in time – simply to defend Hidden Variables is a really gross complicating assumption. Hidden Variables alone, as a theory, is unnecessarily complicating, but then to throw away the fundamental principle of causality – that’s certainly gilding the crocodile!

He thinks that the price is considerably lower than people usually suppose–that, in fact, the possibility of backward causation falls out almost automatically from the fact that physics is (with the exception of one particle) time-symmetric. The apparent massive time-asymmetry we seem to see is an artifact of the fact that our local universe has low entropy in one direction (the past) and high entropy in the other (the future). This fact gives us the impression that causation can only go one way. (And in our practical macro-realm, this is basically true.) But it’s only an impression, and an inaccurate one.

If almost all physical interactions are time-symmetric (and they are) then when you look at simple micro-physical interactions you might as well say the future state caused the past state rather than the other way around. This doesn’t force you to say it, of course. But if, in certain special cases, you do say it, he argues, you get the results of Quantum Mechanics out of a model that avoids all action at a distance, gets around sticky issues like the measurement problem, and generally takes “god’s dice” out of the picture (though it doesn’t make us privy to the underlying information that gives rise to the “dicelike” appearance of quantum reality; there is backward causation, but not in a way that lets us see the future).

I should note that he generally avoids the phrase “backward causation” since he thinks our concept of causation is really only properly applicable at the macroscale (I think!) and the “backward causation” I’ve been discussing is mostly a micro-physical phenomenon (though of course you can magnify it to the macroscale with the right detector setup). I may be failing to do him justice in a way. He uses the phrase “advanced action” instead of “backward causation”. But my current thinking is that this is a potayto/potahto issue, that really, what he’s talking about is backward causation even if he’d prefer to avoid the phrase.

His argument hinges on a claim that there has been a presupposition underlying physics for which there is no evidence, namely:

At the microphysical level, temporally forward-looking influences are coherent, and temporally backward-looking influences are not.

To explain what that means, I’ll discuss a similar idea about the macrophysical level. At the macro level, for any particular event, the most usual course of things is for an event’s causes not to be particularly coordinated with each other, while the event’s effects are highly coordinated. Throw a stone into a pool, and the effects–the radiating waves–show a high degree of correlation with each other. Not so much the cause of the process–the stone dropping into the pool doesn’t show much coordination with anything else.

At the macro level, if you watch a film in which causes are coordinated and effects are not, you soon realize you’re watching a film played backwards. What you’re seeing is physically possible, but appears bizarrely improbable, because we know that at the macro level, coordination of influence goes forward in time, not backward in time.

Price argues that we have generally assumed the same holds at the micro level as well. But, he argues, there’s literally no evidence that this is so. Another GQ I have then (which I’m not sure will be answered since it’s buried here!) is whether he’s right that there’s no direct evidence that this principle holds at the micro level.

This sounds like Cramer’s ‘Transactional Interpretation’.

The transactional interpretation doesn’t have hidden variables, and features a probabilistic wave-function collapse, though, IIRC. Similarly, the other time-symmetric framework I’m (somewhat) familiar with, the Aharonov-Vaidman two-state-vector formalism, isn’t a HV theory either.

As to the specific claim, that you can get quantum (Bell) correlations with backwards causation without violating local realism, I don’t have any trouble believing it – basically, you should be able to prepare any kind of correlation that way, since you can essentially ‘cheat’ and look at the future outcome, and modify the present outcomes accordingly. (I think that causality is actually a – perhaps implicit – assumption in Bell’s theorem, but I’m not perfectly sure about that.)
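If it helps, here’s a toy numerical sketch of that ‘cheating’ (my own illustration, not anything from Price or Cramer): let the source see both future measurement settings when it hands out its instructions, and each wing just reads off its instruction locally. The correlation comes out as the singlet value E(a,b) = -cos(a-b), and the CHSH quantity lands near 2*sqrt(2), beyond the bound of 2 that settings-independent local instructions can’t exceed.

```python
import math, random

def retro_source(a, b):
    """'Retrocausal' source: it sees both future settings a, b and draws a
    pre-correlated pair of outcomes (A, B) with P(A,B) = (1 - A*B*cos(a-b))/4,
    which gives E[A*B] = -cos(a-b), the quantum singlet correlation."""
    c = math.cos(a - b)
    r = random.random()
    for A in (+1, -1):
        for B in (+1, -1):
            p = (1 - A * B * c) / 4
            if r < p:
                return A, B
            r -= p
    return +1, +1  # unreachable in practice: the four probabilities sum to 1

def E(a, b, trials=200_000):
    """Estimate the correlation E(a,b) by sampling the toy source."""
    return sum(A * B for A, B in (retro_source(a, b) for _ in range(trials))) / trials

# Standard CHSH measurement angles
a1, a2 = 0.0, math.pi / 2
b1, b2 = math.pi / 4, 3 * math.pi / 4
S = E(a1, b1) - E(a1, b2) + E(a2, b1) + E(a2, b2)
print(abs(S))  # ~2.83 > 2, but only because the 'hidden' instructions
               # were chosen with knowledge of both future settings
```

Of course, that settings-dependence is exactly what an ordinary local hidden-variable model isn’t allowed to have, which is the sense in which the ‘retrocausal’ loophole sidesteps Bell.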

However, as for the time-symmetry of fundamental laws, there’s at least one exception, which occurs in weak interaction decays, most famously in that of the neutral kaon. So, empirically, physics isn’t perfectly time-symmetric. And this does have macroscopic consequences: it’s widely expected to be the reason for the domination of matter over antimatter in the universe; if the effect didn’t exist, there ought to have been exactly as much matter as antimatter produced in the big bang, which would subsequently have annihilated, leaving a universe chiefly populated with radiation.

Also, there’s always the danger of causality violations if you admit retrocausality; there are ways around that, but to me, this makes things sufficiently undesirable that I’d rather stick with an ordinary notion of causation.

Incidentally, I just noticed that a recent issue of Philosophy Bites features Huw Price talking about his views; if you’re interested, you can find it here. I haven’t had the chance to listen to it yet, though.

I’m probably just talking to myself here, but another issue I just thought of is how, actually, retrocausality is ‘better’ than non-locality – an event A can receive influence from an event B if B retrocausally influences an event C in the mutual past lightcone of both A and B, which then in turn influences A via ordinary causation; or A can influence B by influencing an event D in both A and B’s future lightcone, which then retrocausally influences B. So why not allow direct nonlocal influence between A and B?

In fact, I’d expect that in order to get Bell correlations between, say, two photons at A and B, something like the above would have to happen, where measurement at B influences the preparation of the state at C, where C is the source of both photons, in order to ensure the right outcome at A…

I think he likes it better for reasons of parsimony–backward influence at least gives us a model for influence that we can assimilate to models of influence we already have (just remove the temporally asymmetric restriction). He talks a lot about how Einstein hated nonlocality and Bell wasn’t a fan and only went for it because he thought it was a forced conclusion, and Price seems to present his idea as a resolution of that problem (he seems to conceive of it as a problem anyway).

The advantages are all conceptual, in that the backward-influence model doesn’t give you any predictions different from those of any other model. So what makes for a “conceptual” advantage? (Either an interesting question or an empty question…)

Another conceptual advantage is that the backward-influence model obviates any need to solve a “measurement problem.” Once again it’s a case of preferring parsimony in the sense of preferring a model which posits phenomena for which there are clear analogues in other realms. In this case, instead of there being a mystery as to just what constitutes a “measurement” and how such a thing can have the influences it does in the Copenhagen (and some other) interpretations of QM, we can now just treat measurement intuitively as we always have–as simply an epistemological change, a change in the amount and kind of knowledge we have of an independently existing object of measurement.

It has the practical advantage over many-worlds of preventing any more philosophically inclined person from climbing into a Schroedinger machine on the belief that he’s destined to experience one of the sets of branching universes in which he survives an arbitrary number of rounds in the machine… :stuck_out_tongue:

I think I recall that he summarizes a response to the question (why prefer any model, much less this one, if they all give the same predictions) toward the end of the book. I’ll take a look.

(One disadvantage I’ve discovered concerning the Kindle–much harder to simply browse through than a physical book would be…)

Well, it’s probably largely a matter of taste – to me, it doesn’t seem that much of a difference whether A and B influence one another directly (no matter what exactly that’s supposed to mean…) or only via some intermediary in either their mutual causal past or future. In both cases, spatially separated events influence one another. And the measurement problem is solved in any hidden variables theory, as the measurement simply reveals the value of the hidden variable, as in any classical theory. (I’m not sure why retrocausal influences would be needed here, incidentally – other than in setting the values of the HVs.)

I think the problem I have with retrocausality is that either it permits sending information into the past – but then, one would have to contend with all the classical time-travel paradoxes, which I think is unattractive. Or, it doesn’t – but then, how is it ‘retrocausality’ in any objective way? You can describe the same physical facts without appealing to the concept…

Nevermind that I think there’s lots about time in general, and causality specifically, that we still don’t understand properly (or perhaps don’t think about in the right way).

I’m reading attentively! I’m only getting a tiny fraction of what’s going on here, alas. The thread QUICKLY surpassed anything I ever learned in undergrad physics!

I was taught that “hidden variables” was dead as phlogiston. Is it still around and kicking?

It seems to me that any sensible definition of “local” in itself already implies a lack of backwards causality. Or to put it another way, backwards causality is itself inherently nonlocal.

EDIT: And hidden variables aren’t dead per se. We know what the math of quantum mechanics says, but how that math is interpreted is a philosophical question, not a scientific one. Bell proved that any interpretation consistent with the math must do away with some feature or another that we’d consider intuitive, but that doesn’t say which of those intuitive features we have to toss out. One possible thing you can toss out is determinism, in which case you don’t have hidden variables. Alternately, though, you can throw out locality, and retain hidden variables as long as they’re nonlocal hidden variables. There’s no way to tell the difference, and thus most physicists nowadays don’t bother worrying about the interpretations at all.

How would the behavior of entangled particles be explained via non-reality (lack of counterfactual definiteness) as opposed to non-locality?

Particles A and B are entangled, in a superposition {UP, DOWN}.

If by non-locality: there is some rule that particles A and B are following, such that if A is measured to be UP, then B will be measured to be DOWN. Even if we don’t make a measurement, we know the rule, and can say definitively what would have happened. It turns out that, ultimately, in order for this to work (for us to be making true statements about things which have not been measured), particle A must communicate instantaneously with particle B to tell it what to do, which violates relativity.
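To make that “rule” concrete, here’s a minimal sketch of my own (not anything from the posters above): the simplest version is an instruction pair fixed at the source, which reproduces the perfect anti-correlation at identical settings with no communication at all. It’s the correlations at mismatched settings, per Bell’s theorem, that no such pre-agreed instructions can fully reproduce, which is what pushes you toward the instantaneous-communication picture.

```python
import random

def source():
    """Shared instruction fixed when the pair is created: one particle carries UP,
    the other carries DOWN. Each detector just reads its own particle's instruction;
    nothing is communicated at measurement time."""
    return ("UP", "DOWN") if random.random() < 0.5 else ("DOWN", "UP")

# Measured at identical settings, the outcomes are always perfectly anti-correlated:
assert all(a != b for a, b in (source() for _ in range(10_000)))
# Bell's theorem: once you allow measurements at different angles, no instruction
# scheme like this can match the full quantum statistics without non-locality
# (or one of the other outs, like retrocausality or many worlds).
```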

If by non-reality: an example would be the many-worlds interpretation. Here there is no definite attribution you can make to the state of particles A and B before measurement, since they no longer represent the state of a particle in a single definite reality. There is some universe where A will be UP and B will be DOWN, and another universe in which the opposite happens. There is no need to communicate faster than light, but there is also no single, definite state to which we can assign reality before a measurement takes place.

Can you please explain this statement?

In the MWI it makes no sense to claim definite knowledge of the state of a particle that has not been measured, because until measurement you don’t even know what universe you will find yourself in. In one universe the particle might be UP, in another it might be DOWN, but until you make a measurement you don’t know which universe you are in, and therefore you can make no claim on what the state of a particle is. All you know is what the Schrodinger equation tells you, which in this case would be the density of universes in which the particle is UP or DOWN, so that you can make a statistical prediction about what universe you will find yourself in after you make a measurement.
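In case it helps, here’s what that bookkeeping looks like in the simplest two-outcome case (a sketch of my own with made-up amplitudes, nothing specific to MWI): the squared magnitudes of the amplitudes are the statistical weights for which branch you’ll find yourself in.

```python
# Born-rule bookkeeping for a two-outcome measurement, with illustrative amplitudes.
amplitudes = {"UP": 0.6, "DOWN": 0.8}  # 0.6**2 + 0.8**2 = 1, so the state is normalised

weights = {outcome: abs(amp) ** 2 for outcome, amp in amplitudes.items()}
print(weights)  # roughly {'UP': 0.36, 'DOWN': 0.64}: the statistical weights for
                # finding yourself in the UP branch vs. the DOWN branch
```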

Obviously, this is more a question of opinion than of fact, but, in general, is the Many Worlds Interpretation gaining support among physicists, or is it still pretty much a minority view? I read about it more often as time goes by, which makes me think it may be “catching on.”

(In the thread on science as opinion, I mentioned my old cosmology prof, who told about how astronomy conferences used to hold votes on whether people thought Cygnus X-1 was a black hole or not. As time went by, more and more people thought it was. This obviously ain’t science, but it is interesting to see the way scientific opinion sways.)

I don’t really see how that answers the question and I don’t see what the probabilities have to do with it either.

You know that one will be up and one will be down. It doesn’t matter which or when. The critical point is that as soon as one is measured, the other assumes the opposite value - instantaneously.

If you mean that each particle’s value is predetermined in any given universe, and the probabilities apply between universes for any given event, which I assume you do, that certainly makes more sense, at least to me. Except now the number of universes becomes at least partly a function of how many times you do the experiment.

I don’t mean to be rude, but that seems more than a bit bogus.

Not in the MWI.

Doing an experiment leads to decoherence, which indeed, depending on how you define “number of universes”, causes their number to increase. You may find this “absurd” or “ridiculous”, but I assure you it is not ridiculous if you really understand what is going on.

The point here is that there is a version of “you” in all these different “universes,” and until you make a measurement you don’t know whether your version of “you” is in the universe with the particle being UP or DOWN. Even after you make a measurement and find that the particle is UP, you cannot say that it was UP in your universe until you made the measurement, because before the measurement the version of “you” corresponding to the universe in which the measurement had been made was not yet anthropically selected. Remember, there is a “you” in all these universes, and you don’t know which “you” you are.

Decoherence, consistent histories, and MWI (which are all roughly equivalent) are together the preferred interpretation among theoretical physicists. Since I don’t want to get into a very long debate about the equivalence of the above interpretations, I will also say that MWI specifically seems to be the slightly preferred interpretation. In my experience a significantly smaller fraction of experimental physicists are MWI-believers. I think that comes from the fact that their education on the subject is little more than osmosis from terrible attempts to popularize physics by inaccurately dramatizing the “many worlds” in MWI. “Interpretations of QM” is generally not taught in grad school, so in many cases a NOVA special or a Brian Greene book is a physicist’s only exposure to the subject.

All the MWI ultimately is (along with the decoherence and consistent histories interpretations) is the idea that the Schrodinger equation is correct (bold and wild and absurd, I know). The Copenhagen interpretation, most should agree, is a sad joke, postulating some non-linear and ill-understood collapse mechanism with no dynamical explanation or relation to the Schrodinger equation, which magically comes into play during a “measurement” (a concept still completely ill-defined in the Copenhagen interpretation) but not at other times. Furthermore, the Copenhagen interpretation leaves out the effect of including the measurement apparatus in the Schrodinger equation, and in this sense is explicitly an incomplete description of reality. The Copenhagen interpretation even lacks both counterfactual definiteness and local realism. It’s just a real disaster, and the founders of QM understood this. The idea was to “shut up and calculate”, and not worry about the “interpretation”, even though they knew the description was ad hoc. Had the MWI been discovered in the 1920s, I have no doubt it would have become the canonical interpretation. But it was discovered in the late 1950s, after everyone had moved on.