Correct, I was specifically speaking to the usual description of Schrodinger’s Cat, which I think is dumb.
As for Penrose and near-instantaneous decoherence of dissimilar macroscopic objects, my personal hunch is that it’s not true, at least in the types of cases we’ve been discussing. I think entanglement is being assumed where in fact only randomness exists. I would be very interested to see if experiments could be devised to measure the effect either way, though.
But that’s just it. Decoherence is what turns entanglement into randomness. In sufficiently macroscopic systems decoherence takes place so fast that for all practical purposes you can ignore the entanglement, and you get classical mechanics back (more or less).
I’m not sure I understand your question. It seems to me that the initial coherence is the basic assumption of the Schrodinger’s-cat paradox.
That is, the paradox assumes that I have a microscopic system, such as a radioactive atom, which is in a coherent superposition of two basis states, say
|decayed>+|not decayed>.
We all agree that such a small system behaves according to quantum-mechanical laws, so we can certainly prepare such a system. Now I begin entangling larger systems’ states with that state, until you find it absurd to consider the whole system to be in a coherent superposition. Then I ask why the “measurement” (wavefunction collapse) happened at that point. Certainly QM allows me to imagine, for any system X (having at least two states X1 and X2), the unitary operation
(|decayed>+|not decayed>)|X1> -> |decayed>|X2>+|not decayed>|X1>.
How large must system X be (or what properties must it have) before you consider this to be absurd? Can X be “cat” (X1=live cat, X2=dead cat)? If not, how about “vial” (unbroken/broken)? I can continue to decompose the apparatus into smaller and smaller subsystems; at some point the very beginning of the “measurement” apparatus looks like another quantum system, so I should certainly be able to consider a coherent entanglement between the radioactive atom and this small system. So where does this coherent entanglement break down (if it ever does), and why?
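To make the chain concrete, here’s a minimal sketch in Python/NumPy (the atom, X, and a further system Y are all modeled as toy two-level systems of my own choosing, and the controlled flip is just an illustrative stand-in for whatever the real coupling is):

[code]
import numpy as np

# Toy model of the chain described above: the atom is a two-level system with
# basis |not decayed> = [1,0], |decayed> = [0,1]; X and Y are further two-level
# "apparatus" systems that start in their X1 / Y1 states.

ket0 = np.array([1.0, 0.0])   # |not decayed> (also used for X1, Y1)
ket1 = np.array([0.0, 1.0])   # |decayed>     (also used for X2, Y2)
I = np.eye(2)
flip = np.array([[0.0, 1.0], [1.0, 0.0]])    # swaps the two apparatus states

P0, P1 = np.outer(ket0, ket0), np.outer(ket1, ket1)
cflip = np.kron(P0, I) + np.kron(P1, flip)   # flip the target only if the control is flipped

# Start: atom in |decayed>+|not decayed>, X and Y in their ready states X1, Y1.
atom = (ket0 + ket1) / np.sqrt(2)
psi = np.kron(np.kron(atom, ket0), ket0)

psi = np.kron(cflip, I) @ psi   # (|d>+|n>)|X1> -> |d>|X2>+|n>|X1>: atom entangles with X
psi = np.kron(I, cflip) @ psi   # now X entangles with the next system Y: the chain grows

print(np.round(psi, 3))
# Nonzero amplitudes only on |not decayed, X1, Y1> and |decayed, X2, Y2>:
# the whole chain sits in one coherent superposition, and nothing in the
# unitary rules stopped it from growing.
[/code]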
This is the reason that “decoherence” schemes were proposed; they don’t rely on consciousness or anything weird like that, just on some difficult-to-shield interaction between our system and the rest of the universe. Gravity certainly falls into the “difficult-to-shield” category, so it’s one candidate for this interaction; but any other interaction terms could induce decoherence.
That is the assumption I don’t see any evidence for. The detector in the box (attached to the vial) collapses the superposition of the radioactive atom states, just as putting detectors over the slits of a two-slit experiment collapses the particle wave function as it passes through. Why would there be any entanglement beyond that?
To carry the two-slit analogy further:
1. Imagine I send a single electron through a two-slit screen with detectors over the slits. I have a computer set up to record which slit it went through, and take different actions depending on the result. For example, for slit 1 it emails a naked picture of me to the President and for slit 2 it emails a clothed picture of me to the President. Is the President’s inbox ever momentarily in a superposition of containing a clothed and a naked picture of me? Does the result change if I take the measurement myself and take the same actions based on the result?

2. Same situation as #1, but replace the two-slit apparatus with a coin flip. Is there any entanglement there, either in the case where I set up a double-blind arrangement where the resulting action from each coin-flip result is unknown, or in the case where I simply note the flip and take the action myself?
In none of these cases do I see any reason to posit entanglement beyond the atom/electron, so I’m curious as to what the answers would be from someone who does.
Suppose my “detector” is some small quantum system. As a theorist, I don’t care much about the details. But suppose for example that I send some other charged particle X (say, some heavy ion) through a tube between the slits as the electron passes through them, in such a way that X is deflected to one side or the other depending on which slit the electron passed through. This can be modeled by a Hamiltonian for the combined e-X system (of the form H[sub]e[/sub]+H[sub]X[/sub]+K, where K=k(x*P) is the interaction Hamiltonian and k is the interaction strength), so it describes a unitary evolution. Generally this evolution will create entanglement between e and X. There can’t be any “collapse” involved in this measurement, because if we want we can reverse the evolution, erasing the measurement result and removing the entanglement (the quantum eraser).
The question is just how big X can be before this stops working, and why.
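Here’s that e-X setup as a runnable toy, with both the which-slit degree of freedom and the probe X shrunk down to single qubits (my own illustrative stand-in for the continuous system and its K=k(x*P) coupling, not the real thing): the interaction entangles them and kills the electron’s interference terms, and running the same unitary backwards brings the coherence back (the eraser).

[code]
import numpy as np

left  = np.array([1.0, 0.0])
right = np.array([0.0, 1.0])
sy = np.array([[0, -1j], [1j, 0]])

# Toy interaction term: rotate the probe X only when the electron is in the
# |right-slit> state (a qubit-sized stand-in for K = k(x*P) above).
K = np.kron(np.outer(right, right), sy)

def evolve(K, t):
    # U = exp(-i K t), built from the eigendecomposition of the Hermitian K
    vals, vecs = np.linalg.eigh(K)
    return vecs @ np.diag(np.exp(-1j * vals * t)) @ vecs.conj().T

def electron_state(psi):
    # reduced density matrix of the electron: trace out the probe X
    m = psi.reshape(2, 2)
    return m @ m.conj().T

U = evolve(K, np.pi / 2)                    # interaction strong enough to fully "measure"

psi0 = np.kron((left + right) / np.sqrt(2), left)   # coherent electron, probe in a reference state
psi1 = U @ psi0                                      # unitary evolution: e and X are now entangled

print(np.round(electron_state(psi0), 3))    # off-diagonals 0.5: interference fringes possible
print(np.round(electron_state(psi1), 3))    # off-diagonals 0:   which-path recorded, no fringes

psi2 = U.conj().T @ psi1                    # run the same evolution backwards
print(np.round(electron_state(psi2), 3))    # coherence restored: the quantum eraser
[/code]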
In both of these cases (your #1 and #2), there’s currently no empirical evidence either way. In both cases, of course, if you want to measure entanglement you have to make sure you have the whole system available – if you don’t, your results will be completely indistinguishable from classical randomness (reducing the question to one of philosophy). Thus, sending e-mail is problematic; each TCP/IP packet has probably passed through several servers and switches, each of which has recorded some information about their passage. These all become part of the system; so do any of the fibers or wires that heated up as the information was transmitted, and so forth. Transmitting quantum information coherently (which is what you have to do if you want to maintain entanglement) is a technical challenge, even when the quantum systems are much smaller than the ones you are considering here.
In case (1) we start with a coherent state (an electron, in a coherent superposition of two states |left-slit>+|right-slit>), and try to entangle it with a macroscopic system (a bunch of computers and routers and wires, in your example). Under decoherence, the idea is that maintaining that whole macroscopic system in isolation from the rest of the universe is practically impossible, or at least highly improbable, so that the “whole system” you’d need to examine to test for entanglement rapidly becomes essentially the whole universe. So under the decoherence hypothesis, there would momentarily be entanglement between the electron and the system, but this would (very) quickly be lost in interactions with the rest of the universe.
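As a rough illustration of why that loss is so quick and so effectively permanent, here is a back-of-the-envelope sketch (the angle and particle counts are toy numbers of my own choosing): if every stray particle that interacts with the apparatus picks up even a tiny branch-dependent nudge, the surviving interference term gets multiplied by the overlap of the two environment states, one nearly-1 factor per particle, and so dies exponentially.

[code]
import numpy as np

# Suppose each environment particle that scatters off the apparatus ends up
# rotated by a tiny angle theta in one branch relative to the other.  The
# interference (off-diagonal) term for the electron is suppressed by the
# overlap of the two environment states: cos(theta) per particle, so
# cos(theta)**N once N particles carry any record at all.

theta = 0.05   # each particle is barely disturbed (assumed toy value)
for N in (10, 1_000, 100_000):
    visibility = np.cos(theta) ** N
    print(f"N = {N:>7}: remaining interference visibility ~ {visibility:.1e}")

# N =      10: ~ 9.9e-01   (a handful of particles barely matters)
# N =    1000: ~ 2.9e-01
# N =  100000: ~ 5e-55     (for all practical purposes, gone for good)
[/code]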
In case (2), I don’t see any reason to suppose the initial coin flip places the coin in a coherent state. Even if there is some quantum source for the randomness, you’d have to do a lot of work to coherently transfer the randomness to the coin. So the coin is probably not coherent, and there’s probably no quantum entanglement there.
Can somebody actually clarify this point for me? What is the principle that requires a collapsed waveform to be collapsed from all reference frames? I am not sure I am using correct terms here (I’m a professional handwaver, but alas, not a physicist), so correct me as needed.
Consider the following thought experiment:
A blank universe with one atom A, that exists as [decayed] + [undecayed]. Add a detector X that detects decay, and make it interact with A. Now, from the reference frame of X the atom is either [decayed] or [undecayed], but overall isn’t it a natural assumption that the system becomes [decayed atom | detected decay] + [undecayed atom | did not detect decay]? If one adds another reference frame that interacts with the detector to read the value (say a person or a detector-detector), then it will observe one of the two possible values on the detector, implying that the atom either decayed or it didn’t from its reference frame. As a whole the system becomes [decayed atom | detected decay | observed decayed status] + [undecayed atom | did not detect decay | observed undecayed status]. Where’s the paradox?
I’m with you on that. Some people seem to have the idea that quantum mechanics can all be distilled down to “Well, anything’s possible” (and relativity to “Everything’s relative”) – regardless of the fact that this would be an utterly useless scientific principle. In part this is probably just people hearing what they want to believe, but I think it also comes from a mistaken tendency to interpret the metaphors physicists use when talking about the theory (to non-scientists) as a substitute for the theory itself. Part of the impetus behind my first post in this thread was that I thought the article linked in the OP was abusing metaphors (albeit not to the degree of some in the pop science crowd) to make quantum mechanics sound more analogous to classical physics than it actually is.
(In addition to wavefunction collapse, I should have mentioned the indistinguishability of particles as an area where quantum physics differs distinctly from our classical intuitions. Again, the article overlooks this issue completely – despite touting the exclusion principle which is of course a consequence of indistinguishability.)
I also think you have a good point that there’s a difference between something which is currently untestable but which we could yet hope to invent a clever test for, and something which is untestable even in principle.
Getting back to this Schrodinger’s cat thing . . .
I agree with your skepticism of the idea that observation by a human is required for wavefunction collapse. That’s an idea that’s utterly at odds with our understanding of how the world works, and without any evidence to back it up. But what’s never been clear to me, even after taking my fair share of courses in quantum mechanics, is this: what exactly is the criterion for collapse? And how are these wavefunction-collapsing interactions fundamentally different from any other interactions? I’m not saying there is no clear answer, and if you have one I’d love to hear it.
Let me put it this way: if your interpretation is that any time a quantum system interacts with a “measurement device” the wavefunction of the system collapses into some eigenstate of the measurement being performed – basically, the usual Copenhagen interpretation – then what is it about the “measurement device” that gave it this collapse-causing ability? How do we identify a system as a “measurement device” in the first place? It can’t be the fact that we as humans use this device to determine properties of the quantum system, because we already agreed that it’s unreasonable to grant human beings a special status in our theory.
I’ve heard some people argue a different position than the one you seem to be advancing above. They say, nothing special happens when our system interacts with the measurement device. The evolution of the whole system – our original quantum system plus the measurement device – is unitary. If the original system was in a superposition of two eigenstates of our measurement, the relative phase information may be transferred to the composite system as a whole, but no information is lost. Then, the argument goes, the detector (which is macroscopic, or at least a component of a macroscopic system) has numerous interactions with its environment in a short span of time. So now there may be entanglement between the original system, the detector, and the environment – but the evolution of this tripartite system has been unitary the whole time. Relative phase information from the original system + detector has been transferred to the composite system of original system + detector + environment, but no information is lost. However, because we can’t measure the environment, we only have partial information, and from our perspective the evolution of the original system+detector looks non-unitary. I.e., because the system under observation is not a closed system, unitary evolution need not apply.
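For what it’s worth, that argument is easy to sketch with toy systems (a qubit each for system, detector, and environment; the sizes and couplings are my own illustrative choices): the global evolution is unitary and the global state stays pure the whole time, but the reduced state of system + detector, once you trace out the environment you never measure, has no off-diagonal terms left, which is exactly what an apparent non-unitary collapse to a classical mixture looks like from the inside.

[code]
import numpy as np

# Qubit "system", qubit "detector", qubit "environment".  Everything below is
# unitary, yet the piece we can actually examine (system + detector) ends up
# looking like a classical mixture, because the relative phase now lives in
# correlations with the environment.

ket0, ket1 = np.array([1.0, 0.0]), np.array([0.0, 1.0])
I = np.eye(2)
flip = np.array([[0.0, 1.0], [1.0, 0.0]])
P0, P1 = np.outer(ket0, ket0), np.outer(ket1, ket1)
cflip = np.kron(P0, I) + np.kron(P1, flip)   # copy a basis state onto the target

# System starts in a superposition; detector and environment start "ready".
psi = np.kron(np.kron((ket0 + ket1) / np.sqrt(2), ket0), ket0)
psi = np.kron(cflip, I) @ psi                # system imprints itself on the detector
psi = np.kron(I, cflip) @ psi                # detector imprints itself on the environment

# The global state is still pure (purity 1): the evolution was unitary throughout.
rho_all = np.outer(psi, psi.conj())
print("global purity:", np.trace(rho_all @ rho_all).real)            # -> 1.0

# Reduced state of system + detector: trace out the unmeasured environment.
m = psi.reshape(4, 2)                        # rows: system+detector, columns: environment
rho_sd = m @ m.conj().T
print(np.round(rho_sd, 3))                   # diagonal 0.5, 0, 0, 0.5 -- no off-diagonal terms
print("system+detector purity:", np.trace(rho_sd @ rho_sd).real)     # -> 0.5 (a mixture)
[/code]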
The problem with this line of reasoning – at least, I consider it a problem – is that if the original system was in a superposition of two different eigenstates of our measurement, that superposition still exists in the state of the whole universe (original system + detector + environment). Because the evolution of the universe is always unitary in this approach (only subsystems of a larger composite system can exhibit non-unitary evolution), there’s no way to project the state of the universe into a state corresponding to a single value of our measurement. So why do we consistently find that our observed system has maintained the same value for the measurement? We’ve explained why we don’t find our observed system to be in a superposition of two states in the measurement basis, but not why we find it to consistently stay in the same state. I’ve seen this basic argument used to suggest that human consciousness becomes entangled with the wave function of the universe as well (see, for instance, Section 3.6 of John Preskill’s lecture notes, available here). But in my opinion that’s an even worse involvement of human consciousness than we wanted to avoid above – it’s saying there’s another version of my mind in the wave function of the universe, and he sees the opposite result for the experiment. To me, this is tantamount to denying the reality of the observable world altogether.
But if we throw out this line of reasoning (which I’m more than willing to do), we’re stuck with this – our measurement device really did produce a non-unitary change in the state of the universe. In which case, we’re back to the questions “Why?” and “How?” What is it about our measurement device that makes it able to change the state of the total system in a non-unitary way? What’s the actual property that distinguishes “measuring devices” (meaning devices that can produce non-unitary changes in the wavefunction) from non-“measuring devices”? And why should devices with this property act any differently than anything else?