In your example only Bob knows what the state is. (Remember, the actual quantum pairing is irrelevant. It can be any twinned quantum value.) Alice doesn’t know anything. So there is no information communicated.
But what if Alice then measures her particle? Then she knows and Bob knows. But so what? They have both made a measurement on a random event. They can log this information to be used at a future time if they get together, but individually they have nothing more than a log of a series of random events. Of what use is that? And they can only convey their logs to one another at the speed of light.
It would be information if they could compel the change to a specified result. But they can’t. It’s a strictly random process, the only true randomness as we currently understand it. Neither Alice nor Bob can do anything but measure. They can’t actively change anything.
No matter what Bob or Alice do, or in what order, all they have is a list of random events. That is not information, and therefore nothing has been communicated.
That’s my understanding of it, anyway, and why people make so much of it. My reading of what Stranger wrote in that paragraph is that he agrees with that interpretation. Maybe if he comes back, he’ll correct me, but that’s the pop sci explanation of why your scenario can’t work.
You could probably build up a sufficiently-rigorous interpretation based on that. If you did, it’d be one example of a nonlocal hidden variable model, which is one possible resolution to the puzzle. Of course, the nonlocality seems rather screwy, but then, every resolution of the EPR paradox seems rather screwy in some way or another, and it’s pretty much a matter of personal taste which one seems least screwy.
Ah, maybe that’s the sticking point. I’m assuming that Alice does have a way to change the state. Otherwise, it’s pretty obvious that there’s no information being communicated. It’s like listening to static on a radio: someone has to be changing a state or it’s just random noise.
No, there is no way to evaluate this until you and your buddy actually get together and compare notes. And this leads into an even more problematic issue: by measuring (and therefore interacting with) one of a pair of entangled particles, you contaminate the connection, such that the states of the particles are not strictly determined by each other but by the other parts of the system that they are each connected to. Although the ideal in thought experiments is to consider an isolated system, the reality is that real interactions always occur in interconnected systems which include the observer and all of his or her equipment, and are thus too complex to model in toto; thus, you have to find a way to determine the state of a restricted system that correlates to the connection between two particles rather than measuring some property (such as spin) of the particles directly.
That is essentially correct. There is information (in the form of a consistent history of complementary states) being conveyed between the two points, but without knowing the local measurements of each state one cannot “decode” any of that information. Furthermore, by forcing one particle into a particular sequence of states in order to relay information, or by attempting to measure those states on the other side, you influence the entangled system, so as a practical means of conveying a signal it isn’t useful, at least in that simplified form. But there is no question that there is some kind of oddness occurring with the phenomenon of entanglement that is at odds with the conventional view of causality as predicated by special relativity. Either there is a “spooky” non-local connection, or some kind of hidden predetermined self-consistent history, or some other high weirdness that provides for an apparent non-local link between two distant particles.
septimus’ “very simple non-paradoxical explanation” sounds like the introduction to the transactional interpretation, in which the collapse (or interaction) of the quantum wavefunction of the system of entangled particles is atemporal, i.e. it occurs not at any particular point in time but at the points in space-time where the measurements are made. This doesn’t really fix the perceived problem, though; you still have some kind of non-causal link between two points that violates the pre-conditions of special relativity, i.e. the instantaneous conveyance of information between two points separated by a space-like interval. Note that this is an assumption (albeit a pretty fundamental one) for relativity, not a conclusion, and assumes that space-time is accurately represented by a smooth and continuous manifold.
It is possible within the framework of relativity to have points in space connected by multiple or even infinite paths, but then that makes the math extremely ugly and the theory basically unworkable in the general sense. Indeed, this is essentially the approach taken by quantum field theories save that they reduce the “infinite number of paths” back down to a manageable quantity via renormalization. With large systems of particles for which the composite probability waveform distribution is significantly smaller than the system itself, such “multiple paths” can be neglected (as can all other quantum behavior).
The problem with any theory that relies on some kind of “retrograde causality” is that this doesn’t appear to be the way the world works. Although the understanding of thermodynamics on the quantum level is still rudimentary, it does appear from quantum electrodynamics that, despite the fact that the probability waveform of an individual electron is time independent (at least, for the three dimensional Schrödinger equation), any “sufficiently large” collection of waveforms has a preferred direction of evolution and can’t be separated back into its individual formulations. As such, reverse causality requires some additional conditions about the evolution of a system to permit the conveyance of information backward, which basically equate to some kind of consistent histories or non-local hidden variables approach. To make a chemical analogy, you can estimate the number of reactants in a combustion process by measuring the resulting products, but only if you know at least what the initial reactant types were; the existence of H[sub]2[/sub]O as a product tells you that some reactants containing hydrogen and oxygen were involved, but not whether they were 2*H[sub]2[/sub] + O[sub]2[/sub], or H[sub]2[/sub]O[sub]2[/sub] + H[sub]2[/sub], or whatever. You need more information than you can have by measuring the system state at one point in order to determine past history.
Let’s say that you’re in Galaxy A, and I’m in Galaxy B. We each have one of a pair of entangled particles. If you “fiddle” with the spin in Galaxy A, wouldn’t I know in Galaxy B that something happened? Isn’t this instantaneous transmission of information? It seems that it doesn’t matter if the spin is up or down, or whatever else. The change in status of the particle, no matter how, seems to me like an exchange of information.
BTW, this is definitely a layman’s question!
Jake
No. Think of it as though you have two random number generators which are guaranteed to produce the same number, but neither of you can tell anything about when the other person looks at theirs; you just know that yours will match up with theirs if you ever meet back up and compare notes.
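To make the analogy concrete, here is a purely classical sketch (the seed value and variable names are my own, and of course a shared-seed generator can’t reproduce the Bell correlations that make real entanglement interesting):

```python
import random

# Classical analogy only: two generators built from the same pre-shared
# seed always agree, yet neither party learns anything new when the
# other reads theirs.
SHARED_SEED = 42  # hypothetical pre-shared value, standing in for the pair

alice = random.Random(SHARED_SEED)
bob = random.Random(SHARED_SEED)

alice_log = [alice.randint(0, 1) for _ in range(10)]
bob_log = [bob.randint(0, 1) for _ in range(10)]

# The logs match perfectly when compared later...
assert alice_log == bob_log
# ...but each log on its own is just a record of coin flips; nothing
# Alice does changes what Bob sees, so no message is carried.
```

The caveat matters: this sealed-envelope-style analogy is exactly the local hidden variable picture that Bell’s theorem rules out, so it illustrates the “no signalling” point but not the genuinely quantum part.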
(For a more detailed story, including what makes the situation different from just having sealed envelopes with copies of the same letter in them, try the television metaphor here)
No, because there is no way to monitor the status of an entangled particle without forcing it into a specific state. In other words, there is no way to tell that a particle is in an “undetermined” state. So basically, when you measure the state of the particle for the first time, there is no way to tell whether you are determining the particle’s state, or whether someone else has already determined it by measurement.
We sentient Earthlings are definitely part of a thermodynamic system (or subsystem) with a clear direction of causality, but is it clear that the direction applies universally or at the particle level?
The collections of waveforms we observe share the same causality direction as us, but that’s because those are the only collections we can observe! Simple example: we observe stars, and know they share our causality direction. But suppose there was a time-reversed star in range of our telescopes. We wouldn’t be able to detect its light, which in our time sense would appear as uncorrelated photons which, moreover, if intercepted for detection, wouldn’t have existed in the first place!
Question for astrophysicists: How would a retrograde-causality star appear? As “dark matter”?
Disclaimer: I understand very little about retrograde causality but don’t think it can be readily dismissed. I hope a physicist with clearest understanding will start a new thread on the topic.
There is actually one possible way to use entangled particles to communicate honest-to-goodness information; however, it works only if quantum mechanics, contrary to current mainstream views, is not really irreducibly random, but merely pseudo-random, i.e. shows randomness of the kind that can be generated by arithmetical means. Because then, if you have some way to distinguish between genuinely random and pseudo-random data with some reliability, Bob could, if he has access to some means of producing genuine randomness, just choose his measurement direction (truly) randomly, while Alice kept hers fixed; that way, she would either receive a pseudo-random string of bits (if Bob didn’t change his measurement direction), or a genuinely random one (if he modulated it). And presto! Superluminal (albeit noisy, because of the statistical nature of finding the difference between randomness and pseudo-randomness) communication.
Of course, to me, this is merely an argument showing that quantum mechanical measurement statistics are in fact genuinely random; but if I were an enterprising science fiction author in need of a way to communicate between his spaceships with a speed greater than that of light, that’s the way I’d turn. Not that many these days actually go through the effort of thinking up something plausible…
On another note, not to knock Indistinguishable’s fine explanation of Bell’s theorem, but, for those still needing a different tack on the subject, I find this explanation very clear and easy to follow.
Trying to follow the scenario: why would Alice receive a “genuinely random” bit if Bob has already made a measurement in a genuinely random direction?
Re: Chronos: Of course, Bob doesn’t actually need a source of true randomness, whatever that means; he just needs a source which passes whatever particular statistical test is being applied by Alice to distinguish it from quantum mechanical randomness. Bob could, in fact, actually very non-randomly craft his directional choices to pass the test.
(Well, maybe. I of course don’t yet understand the part I asked about, so maybe that won’t work. But, in general, I don’t think there’s any use trying to speak of the objective statistical “genuine randomness” of particular strings of bits as opposed to others; just, passes this test vs. passes that test, displays no correlation with this, etc.)
Here’s the paper the idea is presented in; basically, the notion of randomness that is used is algorithmic incompressibility – a string is algorithmically random if there exists no program substantially smaller than the string itself that can be used to generate it. This is actually equivalent to other definitions of ‘true’ randomness, such as Martin-Löf’s, so I reckon it passes muster.
As for what Alice receives, I probably wasn’t extremely clear that you use a string of N bits, N measurements, to communicate one bit; if this string is compressible, it’s taken to signify a 0-bit, if it’s incompressible, a 1-bit. If Bob has an incompressible bit string, and uses it to determine his measurement directions, his N-bit outcome string must be incompressible, as well; but then, Alice’s string must be incompressible, since you can obtain Bob’s string from Alice’s, and hence, every compressed version of Alice’s string would also serve as a compressed version of Bob’s.
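Since Kolmogorov complexity is uncomputable, any real receiver would need a practical proxy. As a toy sketch of the encoding described above (my own illustration, not the paper’s construction; the block length, threshold, and use of zlib as a stand-in compressor are all assumptions):

```python
import random
import zlib

N = 4096  # bits of measurement data per signalled bit; hypothetical length

def looks_incompressible(bits, margin=0.95):
    """Crude proxy for algorithmic randomness: treat a string as
    incompressible if zlib can't shrink it appreciably. This is far
    weaker than Kolmogorov/Martin-Löf randomness, but shows the idea."""
    data = bytes(int(bits[i:i + 8], 2) for i in range((0), len(bits), 8))
    return len(zlib.compress(data, 9)) >= margin * len(data)

# A highly patterned outcome string (Bob kept his basis fixed) -> 0-bit:
patterned = "01" * (N // 2)
# A random-looking outcome string (Bob modulated randomly) -> 1-bit:
rng = random.Random(0)
noisy = "".join(str(rng.randint(0, 1)) for _ in range(N))

decoded_zero = 1 if looks_incompressible(patterned) else 0
decoded_one = 1 if looks_incompressible(noisy) else 0
```

Ironically, the “noisy” string here is produced by a PRNG and is therefore highly compressible in the Kolmogorov sense (seed plus generator is a short program), which is precisely the gap between practical compressors and the notion of randomness the paper needs.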
Not sure exactly what you’re trying to convey, but…
Bell’s theorem shows that under a certain set of assumptions (that most deem to be reasonable) any hidden variables theory must be non-local.
Therefore explicit knowledge of these hidden variables (not just knowing they exist, though) must lead to the ability to relay information at unlimited speeds.
Suppose there was an area of the universe filled tusk to arsehole with invisible giant pink elephants? What would that prove? The universe–our world–is all and only those areas we can observe or infer by observing consistent phenomena. Nobody outside of an episode of Dr. Who has ever seen a “time-reversed star” or anything of the sort, and we have no evidence by which to infer that such a thing could even exist.
This is not a sensible question. I don’t know where this notion of “retrograde causality” comes from, but it is not to be found anywhere in scientific literature, and no actual physicist has a “clearest understanding” of it. There are ways within the framework of relativity to formulate closed time-like curves which have endpoints that terminate before the start in the time direction, but that isn’t a violation of (local) causality, as the system following that path is always experiencing time as advancing, and there are physical reasons to believe that such paths, like roller coasters, have to return to their beginning point before one can exit.
There are interpretations of quantum mechanics that are acausal (transactional, consistent histories, Everett-DeWitt many-worlds) and explicitly allow for time symmetric behavior, but there is no credible interpretation that could be called reverse causality (or as you refer to it, “retrograde causality”). The interpretation that comes closest would be the consciousness causes collapse interpretation, but outside of a few attempts at pairing Eastern mysticism with the conundrum that is quantum mechanics, and a few genuine physicists who have ventured far afield from any testable and falsifiable speculations, this isn’t given much credibility, and is largely regarded, like solipsism, to be a topic unsuited for civil discussion among intelligent adults.
There are actual techniques within quantum field theories that assume causal loops with virtual particles and other counterintuitive behavior such as the spontaneous generation or destruction of energy (non-conservation), but nobody accepts this as being “real” as they happen on time scales that are literally too small to measure, and always balance out to zero. They are instead accepted as mathematical formalisms that work and describe some process of which we do not have a complete or intuitive understanding; and indeed, any working physicist or engineer applying quantum mechanics to any real world experiment will “shut up and calculate” rather than noodle about with untestable philosophy.
I’m still reading through the paper, but perhaps I can save time by just asking you to clarify. Can you clarify why the two bolded bits are true? I’m afraid I don’t see it yet.
For me, if the string is compressible, it means you can infer explicit if not exact knowledge about the hidden variables governing the system.
Seems to me very convoluted, as if you can have explicit knowledge of the hidden variables governing the system, then there must be a better way of communicating!
I don’t think Half Man Half Wit’s argument relies on any hidden variables… he’s talking about compressibility of such non-hidden strings as the choices of measurement directions and the results of those measurements.
If the string of data is compressible then it’s being governed by hidden variables (in my mind). The rules which govern a compressible string are in essence the hidden variables.
The way I read the paper is that it’s proving a weaker form of Bell’s theorem (i.e. that hidden variables theories where the variables are recoverable to some extent must be non-local, as they allow spacelike communication).
The problem with the notion of algorithmic incompressibility is that you can’t always tell, in principle, whether a string is compressible or not, due to the undecidability of the halting problem. Given a string, you could of course take every string shorter than it, interpret it as an algorithm, run the algorithm, and see if it outputs the original string. At any given time, you’ll have some algorithms that have halted, and some that have not. But of the ones which have not, how can you tell which ones will never halt, and which ones will halt eventually and give you a string which might be your target?
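The step-bounded version of that search can be sketched as follows (a toy illustration of mine; the “programs” are just Python generators standing in for bitstrings fed to a universal machine, and the budget is doing all the work, since without it some candidates would run forever):

```python
def prog_emit_target():
    yield "0101"            # halts immediately with the target string

def prog_slow(n):
    def gen():
        for _ in range(n):  # halts, but only after n idle steps
            yield None
        yield "0101"
    return gen

def prog_loop():
    while True:             # never halts; undetectable in general
        yield None

def bounded_search(programs, target, budget):
    """Run every candidate for at most `budget` steps; return the index
    of one that produced `target`, or None. A None result never proves
    incompressibility: within any finite budget, a looping program is
    indistinguishable from one that halts just after the cutoff."""
    for i, prog in enumerate(programs):
        gen = prog()
        for _, out in zip(range(budget), gen):
            if out == target:
                return i
    return None

# The slow-but-halting program is found within the budget...
assert bounded_search([prog_loop, prog_slow(10), prog_emit_target],
                      "0101", 100) == 1
# ...but a too-slow one is missed, and we can't tell it from the loop.
assert bounded_search([prog_loop, prog_slow(10**6)], "0101", 100) is None
```

In practice this is why compressibility can only be bounded from above (by exhibiting a short program) and never certified from below.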