Please explain the 'delayed erasure' form of the quantum erasure experiment

The full experiment and its permutations are laid out with great clarity here.

What is of particular interest in light of the Schrodinger thread is the delayed erasure form of the experiment.

To briefly outline for those not familiar: a famous experiment in wave-particle duality involves sending individual photons toward a barrier with 2 slits placed close together. Normally, a beam of light passing through the slits would interfere with itself and create an interference pattern on a screen on the opposite side of the slits.

It turns out that when you just send single photons at the double slit, you also get an interference pattern. The explanation is that due to the wave nature of the photon, each photon interferes with itself, and thus the pattern is still created.

However, if one attempts to determine which slit a photon has passed through, by using a polarizing filter or some other means, the pattern will disappear, even though this should not affect the photon’s ability to self-interfere.

This leads to the quantum eraser experiment where 2 beams of entangled photons are used, each with a particular polarization.

At the link, there is an s path and a p path. In the s path, photons encounter polarizing filters in front of the slits. The filter at each slit causes a different change in polarization that can be detected at the s detector, so as to identify which slit the photon entered through.

The p path has a single polarizing filter but of a different kind. The photons produced by the light source are vertically or horizontally polarized. The p filter changes them both to be diagonally polarized so that they cannot be distinguished.

Since the filters at the slits in the s path require that the polarization of the incoming photons be identifiable, i.e., either vertical or horizontal, if the photons are all diagonal, then they will be indistinguishable.

So when the experiment is first run with the polarizing filter on path p and the filters on the slits on path s, no interference pattern is produced. This is because from the measured polarization of each p photon, we can identify its partner as having the opposite polarization, and from that deduce which slit the s photon passed through. Therefore, no pattern is produced.

However, when the filter in path p is rotated to give the p photons a diagonal polarization, the polarization of the s photons can no longer be identified, and therefore a pattern is now produced.

In the delayed erasure form of the experiment, the filter in path p is placed so far from the photon source that none of the p photons pass through it until the partner entangled photon has been registered by the detector in path s.

When this is done, setting the p filter to be vertical or diagonal will produce the same results as before even though the s photons can’t have any way of “knowing” which way they’re supposed to behave.

So the question is, how does one have a wave function collapse or decision point split before the criteria necessary for the collapse or split have occurred?

This issue was recently discussed in an article in New Scientist. However, the link is subscription only, and I won’t copy the text here.

They start with Wheeler’s delayed choice experiment (interference experiment where the choice of path is not determined until after the photon has passed the slits). This shows that the results reflect whether you have determined which slit the photon passed through - if you do know, you see a particle; if you don’t, you see a wave.

The extension experiment performs a delayed-choice quantum eraser, but the choice is determined by a quantum event. If you run the experiment without determining the state of the control photon, your results from the detectors are indicative of both - neither wave (white) nor particle (black), but a mixture of both (grey). If you vary the superposition ratio of the control photon (to 30/70, say), your results also swing to reflect that (less or more grey). When you observe the control photon, you can determine which path the experimental photons took, and your results resolve accordingly.
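A toy numerical sketch of that greyness (my own construction, not from the article): treat the observed pattern as a blend of a pure fringe distribution and a pure no-fringe distribution, with the weight standing in for the control photon’s superposition ratio.

```python
import numpy as np

x = np.linspace(-1, 1, 9)               # positions on the screen
wave     = 0.5 * (1 + np.cos(10 * x))   # pure "wave": full fringes
particle = np.full_like(x, 0.5)         # pure "particle": no fringes

# Weight w stands in for the control photon's superposition ratio.
for w in (0.5, 0.3, 0.7):
    blend = w * wave + (1 - w) * particle
    print(f"w={w}: fringe contrast = {blend.max() - blend.min():.2f}")
```

The fringe contrast scales directly with the weight: less which-path erasure, greyer pattern.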

The conclusion - quantum behaviour is weird. However, it does tend to reject a hidden variable interpretation. The real answer is that the underlying reality is very different from the classical wave/particle duality. All we see are the projections on the wall, and depending on which light source we use, we get different shadows.

[QUOTE=New Scientist article]
The outcomes of the latest experiments simply bear that out. “Particle” and “wave” are concepts we latch on to because they seem to correspond to guises of matter in our familiar, classical world. But attempting to describe true quantum reality with these or any other black-or-white concepts is an enterprise doomed to failure.
[/QUOTE]

Thanks. I subscribe so I’ll take a look at that as I catch up on my reading. :slight_smile:

But the thread I reference has a very ethereal discourse on the philosophical underpinning of QM and I was hoping to use this as a concrete example which could embody those Platonic ideals.

That is neat. It is like we see the uncertainty principle directly.

The resolution of what is actually going on will be reality-shaking. That year, there will be no doubt as to who gets the Nobel Prize.

si_blakely: I just read the NS article. I think I understood it, but some visual aids would have been pretty much de rigueur, I thought. Although maybe they have them in the print edition. I read it online since I’m a few weeks behind on the snail mail. Anyway, I think I got the idea, which is basically that there is no duality but rather the whole notion is completely fungible.

Even so, the quantum eraser, delayed eraser experiment is what really bothers me. And that’s not because of the normal quantum weirdness. I got used to that a long time ago. I’m not really sure why, but things like superposition, coherence, etc, never really seemed to violate my sensibilities. But this one does, and partly because I think it might go a bit deeper than just quantum weirdness.

The fact that the photons hitting the quarter-wave polarizers on path s can determine whether or not to create an interference pattern before their entangled partners on path p ever hit that polarizer seems to be saying that the s photons know the outcome of the experiment from the beginning. Or to put it differently, their behavior is time independent.

That probably doesn’t sound all that weird to a lot of people, but I find that quite disturbing. I keep trying to see if there’s something obvious that I’ve overlooked, but the way the experiment is explained, I just don’t see anything.

If that’s true, what might be interesting about it is the method by which the entangled photons are created - spontaneous parametric down-conversion. I couldn’t find much on this, but I believe that like all forms of entanglement, it relies on random vacuum fluctuations. And about a week or so ago, I referenced an article that talked about the possibility of extracting time-like entanglement directly from the vacuum. This is a different article, since the original was from phys.org.

So I was wondering if a certain amount of time independence might not be a basic quantum characteristic. It’s a strange idea I suppose, but in this context, definitely not the strangest. :wink:

I guess I’m missing something. Can’t you then see whether the path-s photons make an interference pattern, and in response re-set the path-p filters the wrong way? (Can you go back in a time machine and kill your grandfather?)

If you accept the notion that cause-effect relationships need not follow the normal “arrow of time,” the behavior is less mysterious. The p filter affects the arriving p photon (the photon has no “sense” of time direction) which then affects its twin, the s photon.

I’m no physicist and certainly cannot explain the details. But it is no secret among physicists that reverse-time causality (or whatever the correct term is) gives a way to model such spookiness.

I’m surprised such a viewpoint is seldom adopted. Yes, reverse-time causality is counter-intuitive. But less so than any other way to escape such quantum spookiness.

BobX: Not sure if serious. Hmmm.

The p and s photons are entangled such that each is polarized BOTH vertically AND horizontally at once. However, when one of the entangled pair is measured, say the p photon, it will randomly assume one orientation or the other. There is no way to predict which.

Whichever it assumes, the s photon will necessarily assume the opposite orientation.

In the experiment, when the p photon is measured, the orientation of the s photon is known to be either vertical or horizontal. Depending on which it is, the quarter-wave filters will give it a right-hand or left-hand circular polarization. That will tell us which slit it passed through.

Whenever it is possible to know which slit was used, no interference pattern is produced. However, if the p photon is given a diagonal polarization, then the s photon also has this orientation, and it will come out of the quarter-wave filters at both slits in the same polarization state. IOW, you won’t be able to tell which slit was used. In that case, NOW you suddenly get the interference pattern again.
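For anyone who wants to check the quarter-wave mechanics, here’s a small Python sketch using Jones calculus. The ±45° fast-axis orientations are my assumption about the setup, chosen to reproduce the behavior described: an H photon exits the two slits in orthogonal (fully distinguishable) states, while a diagonal photon exits identically from both.

```python
import numpy as np

def rot(theta):
    """2x2 rotation matrix."""
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s], [s, c]])

def qwp(theta):
    """Jones matrix of a quarter-wave plate with its fast axis at angle theta."""
    return rot(theta) @ np.diag([1, -1j]) @ rot(-theta)

H = np.array([1, 0])               # horizontally polarized s photon
D = np.array([1, 1]) / np.sqrt(2)  # diagonally polarized s photon

for label, state in (("H", H), ("D", D)):
    out1 = qwp(+np.pi / 4) @ state      # plate in front of slit 1
    out2 = qwp(-np.pi / 4) @ state      # plate in front of slit 2
    overlap = abs(np.vdot(out1, out2))  # 0 = which-slit info, 1 = erased
    print(f"{label} input: |<out1|out2>| = {overlap:.2f}")

# H input: |<out1|out2>| = 0.00  -> slits marked, no interference
# D input: |<out1|out2>| = 1.00  -> slits indistinguishable, interference
```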

The trick comes when you send the s photons through the quarter-wave filters BEFORE you decide if you are going to give the p photons a diagonal polarization or not. The s photons produce the correct results before they could possibly know what they’re “supposed” to do.

At one level of thinking about it, this whole quantum erasure business, and especially the delayed erasure, is just plain inexplicable by any humanly-imaginable logic that Aristotle or anyone ever since could come up with. So that’s one answer to the OP’s original question: Just. Plain. Inexplicable.

Or, to put it as my big brother, the math major, did: It’s all in the mathematics. The math involved is beyond mind-bending for anyone other than a very advanced mathematician and physicist, and provides the “mathematical” explanation of quantum phenomena. My brother’s point was, it’s all just so alien (in light of our normal experience) that there’s just no real macro-world metaphor or analogy that can compare to quantum phenomena. You just gotta do the mind-bending math, and then you just gotta understand it from that, and believe that it works like the math says it does, just because the math says it does.

Then you go out and do some real-life experiments, like these quantum erasure games, and lo and behold, you find that you really get the incomprehensible results that the mind-bending math says you get. And that may just be about as comprehensible as it gets.

Almost no physicists are on board with it, but there’s a view endorsed by some philosophers of science (Huw Price being one major name) that weird QM phenomena look much less weird if we assume particles’ behavior looks exactly the same forward and backward in time. That assumption (it is argued) turns out to obviate the need for concepts of decoherence or nonlocality. In exchange, you have to take on board the concept of phenomena that can be aptly characterized as moving backward in time, in the sense that the phenomena have an entropy arrow pointing to the past instead of the future.

The puzzle is figuring out why at the macro scale the entropy arrow always points forward if it just as often points backward on the micro scale. They’ve got things to say about this but I completely forget how the argument went.

Question is, is accepting time working funny better than space working funny?

I am still not understanding what you are saying.

So: can you look at what the s-photons did, and THEN decide to give the p-photons the diagonal polarization, or not, depending on whichever is the opposite of what the s-photons told you???

BobX: I’m not sure that would be any different. In both cases, the photon in the p path doesn’t “know” what kind of polarizer it’s going to hit until it actually does.

Even so, it might be an interesting question to investigate any time dependent aspect of the experiment. I suppose if there were a way that you could change the polarizer on a nanosecond or femtosecond basis and you got a different result, that would be pretty interesting.

They argue it’s not a particularly funny way for time to work–it’d be more surprising, they argue, if there were an arrow of time at the microscopic level. That’d be something puzzling to explain. If there were no arrow of time at the microscopic level, there’d be nothing to explain.

I think in order to resolve such seeming paradoxes, it is often useful to simply look at quantum mechanics as a generalized theory of probability. In classical probability theory, the probability of a particle hitting the screen at some point is equal to the probability of getting there via slit 1, plus the probability of getting there via slit 2:

p(hit screen at p) = p(arrive at p via 1) + p(arrive at p via 2)

This is the law of total probability. However, in quantum mechanics, one does not talk about probabilities directly, but rather about probability amplitudes, which are complex numbers whose squared magnitudes are equal to probabilities (these numbers are essentially the wavefunction’s values). Thus, the probability of a particle arriving at point p is the squared magnitude of its amplitude to do so:

p(hit screen at p) = |a(hit screen at p)|[sup]2[/sup]

But in order to derive this amplitude, one has to add the amplitudes of getting there via slit 1 and getting there via slit 2:

p(hit screen at p) = |a(hit screen at p)|[sup]2[/sup] = |a(arrive at p via 1) + a(arrive at p via 2)|[sup]2[/sup]

This means that in quantum mechanics, the law of total probability does not hold, since |a[sub]1[/sub]|[sup]2[/sup] + |a[sub]2[/sub]|[sup]2[/sup] is not equal to |a[sub]1[/sub] + a[sub]2[/sub]|[sup]2[/sup]. But the latter sum can have any value from 0 to 1, unlike the former, which, if either term is different from zero, will always be greater than zero. In fact, one sees that

|a[sub]1[/sub] + a[sub]2[/sub]|[sup]2[/sup] = |a[sub]1[/sub]|[sup]2[/sup] + |a[sub]2[/sub]|[sup]2[/sup] + 2Re(a[sub]1[/sub]a[sub]2[/sub][sup]*[/sup]),

where the last term is responsible for the interference. Here’s a little illustration of this.
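A quick numerical check of that identity, with illustrative amplitudes of my own choosing (the two paths half a wavelength out of phase, so the point sits in a dark fringe):

```python
import numpy as np

# Two made-up path amplitudes for one point on the screen.
a1 = 0.5 * np.exp(1j * 0.0)    # via slit 1
a2 = 0.5 * np.exp(1j * np.pi)  # via slit 2, half a wavelength out of phase

classical  = abs(a1)**2 + abs(a2)**2      # add probabilities
quantum    = abs(a1 + a2)**2              # add amplitudes, then square
cross_term = 2 * (a1 * np.conj(a2)).real  # the interference term

print(f"{classical:.2f} {quantum:.2f} {classical + cross_term:.2f}")
# 0.50 0.00 0.00 -> the cross term exactly accounts for the difference
```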

But now it’s clear why the interference pattern disappears once we obtain information about which path the particle followed: the probability of arriving at the screen via slit 1 is then just |a[sub]1[/sub]|[sup]2[/sup], a classical probability, and likewise for slit 2. Thus, having made the observation, what we’re left with are just the classical probabilities without the interference term, obeying the law of total probability.

The crux now is that it doesn’t matter when we obtain that information: as long as we do at some point in time, we have to describe the probability distribution of the particles arriving at the screen differently than we would if we hadn’t had that information. This is actually no different from classical probability theory: there, too, if you retroactively obtain information that changes (via Bayesian updating) the probability distribution according to which you described some statistical experiment in the past, you can find new patterns in old data.

And ultimately, that’s all there is to it: without finding out which of the detectors the particles were detected at, you can’t recover the interference pattern; if you just monitor the detector that functions as the ‘screen’, you will only see a washed-out distribution. It’s only if you acquire the additional information that you can, so to speak, separate the wheat from the chaff, look at only the appropriate events (detections), and only then recover the interference pattern.
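As a toy model of that sorting (again my own construction, not the published analysis): in the erased configuration, the coincidences with one p outcome form fringes, those with the other form anti-fringes, and the raw screen data is their featureless sum.

```python
import numpy as np

x = np.linspace(-1, 1, 9)                   # positions on the s screen

# Conditional patterns in the erased configuration (toy model):
fringes      = 0.5 * (1 + np.cos(10 * x))   # coincidences with p at +45 deg
anti_fringes = 0.5 * (1 - np.cos(10 * x))   # coincidences with p at -45 deg

# Watching the s screen alone sums over both p outcomes:
marginal = 0.5 * (fringes + anti_fringes)
print(marginal)   # flat 0.5 everywhere: no pattern without sorting by p
```

Only after binning each s detection by its partner’s p outcome do the fringes (and anti-fringes) reappear.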

This also makes it clear why you can’t use this kind of effect to send messages ‘into the past’: you would have to find some way to let the observer in the past know the detections you made in his future. But obviously, such a way already presupposes a means for communicating with the past!

The vacuum fluctuations are responsible for converting one photon that is incident onto the BBO crystal into two photons, each with half the original’s frequency; these are then entangled simply because they share a common source. Vacuum fluctuations have nothing to do with entanglement in general. All quantum systems that have interacted in the past are entangled, simply because there is no way to fully describe each one on its own (they are described by a wavefunction that can’t be decomposed into the product of two wavefunctions). Vacuum fluctuations are simply the observation that, unlike in classical mechanics, a quantum oscillator has a lowest energy level that isn’t zero; the mystery you seem to attach to them just isn’t there.
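To put that “can’t be decomposed into a product” sentence in code (a sketch with a state and labels I chose for illustration): whether a two-photon polarization state factors can be tested via the singular values (Schmidt coefficients) of its coefficient matrix.

```python
import numpy as np

# |psi> = (|H>|V> + |V>|H>) / sqrt(2), as a 2x2 coefficient matrix
# (rows: first photon H/V, columns: second photon H/V).
psi = np.array([[0, 1],
                [1, 0]]) / np.sqrt(2)

# A product state a (x) b has a rank-1 coefficient matrix, so more than
# one nonzero singular value means the state cannot be factored.
print(np.linalg.svd(psi, compute_uv=False))      # [0.707 0.707] -> entangled

product = np.outer([1, 0], [0, 1])               # |H>|V>, a product state
print(np.linalg.svd(product, compute_uv=False))  # [1. 0.] -> separable
```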

Just a minor nitpick, but most physicists would, in fact, be on board with the idea that quantum phenomena (with just a few exceptions) look the same forward and backward in time. However, few agree that this entails the possibility of retrocausal influences.

HMHW, I don’t mean to be rude, but I really get the feeling that you’re jerking my chain. The exposition of the experiment was published in Physical Review A and it was clearly presented as having no apparent explanation.

Besides that, you are completely ignoring the effect of the quarter-wave filters. According to your analysis, there should be no difference in the experiment with or without them.

And finally, how did you ever manage to work Bayesian decision theory into this? That’s completely irrelevant, at least in the form I’m familiar with.

I didn’t talk about decision theory, merely about updating: if you have some prior probability distribution, new information will cause you to update it to a different posterior distribution; I don’t think what’s happening here is really that much different. I confess I didn’t look at the details of the paper, but if it’s not terribly different from the standard delayed choice eraser experiments, I don’t think it’s all that much more mysterious.

I skimmed a couple of descriptions and it’s quite a bit different. In the classical experiment, a photon passes through the slits and THEN the entangled photons are created - a different pair depending on which slit it passed through.

This doesn’t make for a conceptual difference though. Ultimately, what matters is that the detection of one of the photons gives you the ‘which-path’ information, while the other builds up the interference pattern (or not). What matters is, if you have which-path information, you don’t see interference, and if you don’t, you do; and the reason for that is simply, as I have explained above, that you must add either amplitudes or probabilities according to the availability of this information.

So how do the quarter-wave filters affect the probabilities, unless you’re saying, in essence, that they’re really superfluous?