The Cat Came Back...... NOT!

I do not have the time
to put this to a rhyme.
But a little time I’ll take
to tell you of your mistake.
And be it not that clever
The Cat is the observer.

So, he is dead or alive before we open the box based on his own observation. Talk about deciding your own fate.

Agreed. Schrodinger’s Cat is (IMO) one of the stupidest thought experiments ever created, and is one of the driving forces for QM being used to justify a variety of mystical and crackpot philosophies.

The issue, in a nutshell:

  1. |dead cat> and |live cat> are not pure quantum states. You can’t superpose them.

  2. The poison gas vial + detector are the “observers” in this system. The cat is irrelevant – you could just as easily replace the cat with a ham sandwich. Thus, whether or not you open the box is meaningless – the system is in the same state it would be if there was no box.

Link to article in question.

I often think that some aspects of quantum mechanics are less weird than people think (there’s a form of “uncertainty principle” even in classical wave theories, for instance) but measurement is perhaps even weirder than people think.

If systems evolve unitarily, how the heck can measurements be non-unitary? I mean, what is a measurement besides the evolution of a bi-partite system, with one part labeled “observer” and the other “observed”? Sure, if you’re observing an open system it might not evolve unitarily, but the wave function of the universe should still evolve unitarily, presumably including a bunch of terms corresponding to the other measurement outcomes you could have had but didn’t . . . .

The Schrodinger’s cat thought experiment isn’t really an adequate summation of these issues, however.

Sorry if that last post was a bit jargon heavy . . . I can try to elaborate in plain English for any interested observers (;)) but for now I’m off to bed.

Ket vector, cat vector. Tomayto, tomahto.

I don’t really know what you’re talking about, so do feel free to elaborate. However, I’ll note that “observer” is often a misunderstanding in QM. It’s not the act of observation (i.e. comprehension by a conscious being) that collapses a wave function into a single state, it’s the act of extracting information. Measurement requires interaction at the quantum level, and those interactions change the system. I think the two-slit experiment is the best example of this.

I’ve recently decided that this site would be significantly better if posters could give mods electric shocks.

Unfortunately, articles like Cecil’s (though clever) perpetuate a common misconception. The Schroedinger’s cat thought experiment was invented by Schroedinger to make the QM duality look bad, not good. Schroedinger, along with Einstein, was against the Copenhagen interpretation. If you think the idea of a cat being alive and dead at the same time is absurd, then you agree with Schroedinger and you get the point of the thought experiment.

I agree, “observers” doesn’t have to mean conscious entities. You could replace “observer” with “measuring apparatus” in my post above and my basic point remains the same.

Let me try to flesh out what I’m getting at:

Quantum mechanics tells us that systems evolve unitarily. That is, the state of the system at time t is given by |Psi(t)> = U(t) |Psi(0)>, where U(t) is a unitary operator. In particular, if the Hamiltonian of our system (in the Schrodinger picture) is independent of time, we can read off U(t) = Exp(-i H t / hbar) from the Schrodinger equation.

For those who don’t know what a unitary operator is, the key point to understand here is that it preserves the probabilities of different measurement outcomes. Let’s say we have a measurement that can tell if the system is in some state |0> or some other state |1>. Quantum mechanics also allows for the possibility that the system could be in a superposition of |0> and |1>, such that the measurement has some probability P[sub]0[/sub] of finding the system in state |0> and some probability P[sub]1[/sub] of finding the system in state |1>. Now suppose we don’t perform that measurement. Instead we wait a time t, and then perform a different measurement – one to tell if the system is in state U(t) |0> or state U(t) |1>. Because the evolution is unitary, those outcomes will also occur with probabilities P[sub]0[/sub] and P[sub]1[/sub], respectively. Unitary evolution preserves probabilities.
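To make the “preserves probabilities” claim concrete, here’s a quick numerical sketch in Python. The particular unitary U and the amplitudes 0.6 and 0.8 are just made-up choices for illustration; the point is only that measuring the evolved state against the evolved basis reproduces the original probabilities.

```python
import cmath
import math

# Toy check that unitary evolution preserves measurement probabilities.
# States are 2-component complex vectors; all numbers are arbitrary.

def mat_vec(M, v):
    # multiply a 2x2 matrix by a 2-vector
    return [M[0][0] * v[0] + M[0][1] * v[1],
            M[1][0] * v[0] + M[1][1] * v[1]]

def inner(u, v):
    # inner product <u|v>, conjugating the first argument
    return u[0].conjugate() * v[0] + u[1].conjugate() * v[1]

# An example unitary U: a rotation mixing |0> and |1>, times a phase.
theta, phase = 0.7, cmath.exp(0.3j)
U = [[math.cos(theta), -math.sin(theta)],
     [math.sin(theta) * phase, math.cos(theta) * phase]]

ket0 = [1 + 0j, 0j]
psi = [0.6 + 0j, 0.8 + 0j]  # a|0> + b|1> with P0 = 0.36, P1 = 0.64

p0_before = abs(inner(ket0, psi)) ** 2
# Evolve both the state and the basis, then measure in the evolved basis:
p0_after = abs(inner(mat_vec(U, ket0), mat_vec(U, psi))) ** 2

print(p0_before, p0_after)  # both come out 0.36, up to float rounding
```

The second probability equals the first because <U 0|U psi> = <0|U†U|psi> = <0|psi> for any unitary U, no matter what U actually is.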

Anyway, there’s another way a system can evolve besides unitary evolution. Quantum mechanics tells us that if we perform a measurement, then the system changes to a state corresponding to a definite measurement outcome. That is, if we measure whether the system is in state |0> or state |1>, then afterwards we will end up either with precisely state |0> or precisely state |1>, even if we actually started with a superposition of the two states. We’ve changed the state so that even if we had a 50% chance of each outcome to begin with, an immediate second measurement has a 100% chance of duplicating the first measurement’s result. Probabilities haven’t been preserved, so clearly measurement isn’t unitary.
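For anyone who likes seeing it with numbers: here’s a little Monte Carlo sketch, with the collapse rule put in by hand (the function names and trial count are my own choices). The first measurement of an equal superposition is 50/50, but an immediate repeat always duplicates the first result.

```python
import math
import random

random.seed(0)

def measure(state):
    # Toy projective measurement in the {|0>, |1>} basis: pick an outcome
    # with the Born-rule probabilities, and collapse the state accordingly.
    p0 = abs(state[0]) ** 2
    if random.random() < p0:
        return 0, [1 + 0j, 0j]
    return 1, [0j, 1 + 0j]

plus = [1 / math.sqrt(2) + 0j, 1 / math.sqrt(2) + 0j]  # (|0> + |1>)/sqrt(2)

first_outcomes = []
repeats_agree = 0
for _ in range(1000):
    first, collapsed = measure(plus)
    second, _ = measure(collapsed)  # immediate re-measurement
    first_outcomes.append(first)
    repeats_agree += (first == second)

print(sum(first_outcomes))  # roughly 500: the first measurement is 50/50
print(repeats_agree)        # exactly 1000: the repeat always agrees
```

Of course, the collapse here is programmed in by fiat, which is exactly the thing the measurement problem asks us to justify.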

Here’s the paradox: Why should measurement (however we define it) be any different than anything else? A system interacting with a measuring device is really just a bunch of atoms interacting with a bunch of other atoms. Why should that system evolve according to different rules than any other collection of atoms?

One way people address this is by considering the system we’re observing as being part of a larger system, consisting of system + measuring apparatus. Before the measurement, the system is in state a |0> + b |1> (for some a and b), and the measuring device is in state |M>. So we can say the full state is a |0, M> + b |1, M>. Now in order to eliminate the paradox, let’s suppose the evolution of this combined system is still unitary even during measurement. The only change is that the state of the measuring apparatus now reflects the state of the system (since otherwise it wouldn’t be a measurement). So now the full state is a |0, M0> + b |1, M1>, where |M0> and |M1> are the two different possible states of our measuring apparatus after the measurement.

Note that the measurement entangles the observed system with the measuring device, so the observed system on its own is no longer in a pure quantum state. However, we can still describe its state by considering the density matrix, determined by taking the partial trace over the states of the measurement apparatus. I’ll take a = b = 1 at this point to spare me some typing (and ignore normalization), but it isn’t necessary. Before the measurement, the density matrix of our observed system is |0><0| + |0><1| + |1><0| + |1><1|. The terms |0><0| and |1><1| reflect the probabilities of measuring the system to be in state |0> or state |1>, while the terms |0><1| and |1><0| are “coherences”, basically reflecting the fact that we have a quantum mechanical superposition. After the measurement, the density matrix of our observed system is |0><0| + |1><1|, without the coherence terms. This indicates that there’s some probability of getting state |0> and some of getting state |1>, but the quantum mechanical superposition is destroyed.
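If anyone wants to check the partial-trace arithmetic, here’s a small Python sketch. I model the apparatus as a single two-level system and keep everything normalized (so the diagonal entries are 1/2 rather than 1); the indexing convention is my own choice.

```python
import math

def reduced_density_matrix(psi):
    # Partial trace over a 2-level apparatus. psi is a 4-vector indexed as
    # psi[2*s + a], where s is the system state and a the apparatus state.
    return [[sum(psi[2 * s + a] * psi[2 * t + a].conjugate() for a in range(2))
             for t in range(2)] for s in range(2)]

inv_sqrt2 = 1 / math.sqrt(2)

# Before: (|0> + |1>)/sqrt(2) tensor |M>, with |M> = apparatus state 0.
before = [inv_sqrt2, 0.0, inv_sqrt2, 0.0]
# After: (|0, M0> + |1, M1>)/sqrt(2) -- apparatus entangled with the system.
after = [inv_sqrt2, 0.0, 0.0, inv_sqrt2]

rho_before = reduced_density_matrix(before)
rho_after = reduced_density_matrix(after)

print(rho_before)  # off-diagonal "coherence" entries are 1/2
print(rho_after)   # off-diagonal entries vanish; only probabilities remain
```

The diagonal entries (the outcome probabilities) are 1/2 in both cases; only the coherences change, which is exactly the decoherence story in the post above.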

So we’ve derived the destruction of our quantum mechanical superposition from considering only a unitary evolution of the combined system (state + measurement device), without postulating some special non-unitary measurement. Problem solved, right? Well . . . not really. We get a result for the observed system that makes sense, but in this treatment the state of the combined system is still |0, M0> + |1, M1>, even after we’ve read off the measurement and seen that we’re definitely in, say, state |0, M0>. This is what I mean when I say the “wave function of the universe” has terms corresponding to measurement outcomes that could have occurred, but didn’t.

Some people say “Well, who cares? We can’t measure the wave function of the universe (or any macroscopic system) anyway.” This basically corresponds to the attitude that the purpose of physics is to predict the outcomes of experiments. Thus it doesn’t matter whether the wave function even exists, much less what it would mean if it includes terms that correspond to unrealized possibilities – all we care about is if the theory gives the right answers about things we can measure. Frankly, I think this position is bullshit. The point of physics isn’t to predict the outcomes of experiments. On the contrary, the point of experiments is to test our theories about how the world actually works. If we found a genie who could tell us the outcome of any possible experiment, but who refused to explain why the experiments would work out that way, no one would say “Well, I guess there’s no point in studying physics anymore.”

At the other end of the spectrum, some people feel that these unobservable components of the wave function have some sort of real existence (as in the Many Worlds interpretation of quantum mechanics). That is, even after we’ve made a measurement, the other components of the wave function corresponding to other versions of us seeing other outcomes of the measurement are just as real as we are. But many (most?) physicists feel that assuming the existence of a practically infinite number of unobservable universes is even more problematic than the problem it’s supposed to solve.

Like I said, measurement in quantum mechanics is weird.

I like kitties.

The point we should take from this, I think, is that the “observer” is the mechanism which reads the state of the particle. It is a bad analogy, you see.

tim314, that was perhaps one of the most lucid explications of the measurement problem I’ve read so far. Nicely done. (I have, by the way, more or less resigned myself to tentatively assuming that there’s probably some sort of objective collapse going on somewhere, at least for practical purposes. Helps me sleep at night till somebody thinks of something better!)

Thank you tim314 for the explanation, which was by far the clearest that I ever had on the topic.

But still I don’t get the paradox. It seems to me that all we have is a simple instance of a trivial probability problem. The system is either in |0> or |1>. Then later it will be in U(t) |0> or U(t) |1>. At some point in time we don’t know which, so we use probability distributions, and then eventually we will make a measurement and know the state, so we will not need probabilities any more.

If I were a little cynical, I would be tempted to suggest that the physicians are traumatized by the notion that there is something they don’t know. So they don’t say: ‘we use probabilities to represent what we know about the universe’, they say ‘the universe is a probability distribution’ (i.e. a wave function).

And then of course they come to paradoxes about observations affecting their system. And so they freak out and then they have to poison cats. Or starve them. When I think that meanwhile, me and my fellow biologists get shouted at each time we kill a mouse. Tja.

So, is there something true about my interpretation? Or am I still failing to understand the real meaning of the Cat?

There’s merit to that line of thinking, and it has been considered in the form of so-called ‘hidden variable’ theories – which basically assert that bare-bones quantum mechanics is incomplete, that there is another, inaccessible layer to reality where everything is nice and orderly and deterministic; i.e. where each particle has its proper state, which we only approximately describe when we’re talking about its wave function.

However, there are two ‘no-go’ theorems that are thought to exclude broad classes of hidden variables – one is the famous Bell theorem, which asserts that if local hidden variables existed, certain inequalities would have to hold, inequalities which are observed to be violated in experiments (so-called Bell tests), and the other is the less well known Kochen-Specker theorem, which rules out non-contextual hidden variables: it says that it is impossible to embed the set of quantum mechanical observables within a set of classical quantities, basically because the algebra of the former need not be commutative.

For Bell’s theorem, consider a pair of entangled spin-1/2 particles (which are always found to have spin of either +h/2 or -h/2 along whatever axis you choose to measure; h here is the reduced Planck constant, usually written hbar, which is often set to 1, but I’ll refrain from doing so here for clarity), prepared such that the spin of one is opposite to that of the other – i.e. if you were to measure one particle along the ‘up’ axis, and the other along the ‘down’ axis, you would get identical results every time (take a positive result to mean that the particle’s spin is aligned with the measuring direction, and a negative one to mean opposing alignment). Conversely, measuring along identical axes, there will be an inverse correlation: whenever you obtain a spin value of +h/2 for one of the particles, you get -h/2 for the other. If measurement takes place along orthogonal axes, the correlation is 0, meaning that the measurements agree in 50% of the cases.

Now, the assumption in a hidden-variable theory is that the particles come with a pre-fabricated spin orientation out of the source, presumably distributed uniformly across the possible spatial orientations, with the two particle spins at a 180° angle relative to each other – picture two clock faces, each with one hand: if the hand of one particle points to 12, that of the other one points to six; if one hand points to two, the other points to eight; and so on. This explains the measurements we have so far: if I measure along the ‘up’ axis, and the particle I measure has its ‘hand’ somewhere around 12, I’ll register a spin of +h/2; if my colleague measures along the ‘down’ axis, I know he’ll get +h/2 as well, because his particle’s hand must be somewhere around six, and the other way round it’s the same.

Somewhat more complicated is the measurement along right angles: I measure again along the ‘up’ axis, and my colleague measures for instance along the ‘right’ axis – the one that would give a spin of +h/2 if the particle’s ‘hand’ stood on three. If the hand of my particle is, say, at eleven, I register a spin of +h/2, and my colleague, whose particle’s hand points at five, registers +h/2, as well, since five is nearer to three than it is to nine. If, however, my particle’s hand points to one, I will register a spin of +h/2, while my colleague registers a spin of -h/2, seven being closer to nine than it is to three.

From there, we can infer a linear relationship for the correlation we expect between my and my colleague’s results: if the angle between measurements is 180°, the correlation is 1; if it is 90°, the correlation is 0; and if it is 0°, the correlation is -1. Fitted linearly, this gives a correlation c = a/90 - 1, where a is the angle in degrees, or, expressed in radians, c = 2θ/π - 1. For a = 45°, say, we should expect a correlation of -1/2, which works out to both measurements yielding the same result in 25% of the cases. Look at the clocks again: measuring along the up axis, I get +h/2 whenever the particle’s ‘hand’ is in the upper half of the clock face, i.e. somewhere between nine and three. My colleague’s apparatus measures along an axis at 45° from mine – exactly between the one and the two on the clock face – and registers +h/2 whenever his particle’s hand lies between the point halfway between 10 and 11 and the point halfway between 4 and 5. So whenever my particle’s hand is between 9 and 10.5, we both register +h/2 (his hand being between 3 and 4.5), and whenever mine is between 3 and 4.5, we both register -h/2 (his being between 9 and 10.5, of course). In total, our measurements coincide on 25% of the clock face, confirming that the linear relationship between correlation and relative measurement angle must hold if the spin values are pre-set at the particles’ formation.

However, using quantum mechanics to calculate the expected spin values, it can be shown that the correlation depends on the cosine of the angle between the two measurements, yielding, for example, a correlation of magnitude 0.71 (versus the linear model’s 0.5) for an angle of 45°. This is an empirical difference between the assumption of exact but hidden values and the predictions of quantum mechanics; wherever it has been tested, QM has won out every time.
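The clock-hand model above is easy to simulate. Here’s a quick Python sketch (the function name and trial count are my own choices) comparing a Monte Carlo run of the hidden-variable model against both the linear fit and the quantum-mechanical cosine, using the same sign convention as above: correlation +1 at 180°, -1 at 0°.

```python
import math
import random

random.seed(1)

def clock_model_correlation(alpha, trials=100_000):
    # Monte Carlo correlation for the "clock hand" hidden-variable model.
    # My axis points "up" (angle 0); my colleague's axis is at angle alpha.
    # Each pair carries a definite hand angle lam; the partner's hand
    # points the opposite way, at lam + pi.
    total = 0
    for _ in range(trials):
        lam = random.uniform(0, 2 * math.pi)
        mine = 1 if math.cos(lam) > 0 else -1
        theirs = 1 if math.cos(lam + math.pi - alpha) > 0 else -1
        total += mine * theirs
    return total / trials

alpha = math.radians(45)
hv = clock_model_correlation(alpha)  # comes out near -0.5
linear = 2 * alpha / math.pi - 1     # the linear fit: exactly -0.5
qm = -math.cos(alpha)                # quantum prediction: about -0.707

print(hv, linear, qm)
```

The simulation lands on the linear prediction, as it must, and visibly disagrees with the quantum cosine – which is the whole point of Bell’s argument.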

Oukile, the thing is that being in a superposition like |0> + |1> is a little different than just saying “|0> with 50% probability and |1> with 50% probability”. It is true that those are the probabilities of finding the outcome |0> or the outcome |1> if you measure in the {|0>, |1>} basis. However, if you instead measure in the {|0> + |1>, |0> - |1>} basis, you get |0> + |1> 100% of the time.

However, after you’ve measured in the {|0>, |1>} basis the system is no longer in state |0> + |1>. It’s either in state |0> or in state |1>, with a 50% probability of each. Note that |0> = (1/2)( |0> + |1> ) + (1/2) ( |0> - |1> ), and |1> = (1/2)( |0> + |1> ) - (1/2) ( |0> - |1> ). This means that if we now measure in the {|0> + |1>, |0> - |1>} basis, we will get |0> + |1> 50% of the time if we had state |0>, and we will get |0> + |1> 50% of the time if we had state |1>. In other words, we get |0> + |1> 50% of the time regardless of the outcome of the first measurement, which is half as often as we would have had the outcome |0> + |1> if we hadn’t done the first measurement.

So, to summarize: Measuring the state |0> + |1> in the {|0>, |1>} basis changed the probabilities of each outcome when the state is subsequently measured in the {|0> + |1>, |0> - |1>} basis. So the state really has been changed, and changed in a non-unitary way.
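To spell out that arithmetic, here’s a short Python sketch (real amplitudes only, everything normalized, helper name my own): measuring |0> + |1> directly in the {|0> + |1>, |0> - |1>} basis gives |0> + |1> with certainty, but inserting a {|0>, |1>} measurement first cuts that to 50%.

```python
import math

inv = 1 / math.sqrt(2)
ket0, ket1 = [1.0, 0.0], [0.0, 1.0]
plus = [inv, inv]    # (|0> + |1>)/sqrt(2)

def prob(outcome, state):
    # Probability of projecting `state` onto the (real) basis vector `outcome`.
    amp = outcome[0] * state[0] + outcome[1] * state[1]
    return amp * amp

# Measure straight away in the {|+>, |->} basis: outcome |+> is certain.
p_plus_direct = prob(plus, plus)

# First measure in {|0>, |1>}: 50/50, and the state collapses to |0> or |1>.
# Either way, a follow-up {|+>, |->} measurement gives |+> only half the time.
p_plus_after = prob(ket0, plus) * prob(plus, ket0) + \
               prob(ket1, plus) * prob(plus, ket1)

print(p_plus_direct, p_plus_after)  # 1.0 and 0.5, up to float rounding
```

No unitary map can turn a certain outcome into a coin flip while fixing the other outcome probabilities this way, which is the non-unitarity in a nutshell.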

Thanks. I had the pleasure of hearing Tony Leggett give a talk on these issues a few years ago. He seemed to favor the idea that there may be some new physics that really does collapse the wavefunctions of macroscopic objects.

**Half Man Half Wit** said:

Erm, it might be, if not for all that specialist notation cluttering it up.

Oukile said:

The term is “physicists”, to differentiate physical scientists from medical doctors. :slight_smile:

isgdre said:

But was he wearing a hat?

If you’re complaining about my use of Bra-ket notation, blame Giraffe, he started it. :wink:

Anyway, I tried to write most of it so all anyone needs to know about notation is that |x> labels some state of the system. I avoided notation like <x|y> since I didn’t expect everyone to know what it meant. (I got more technical in the paragraph on density matrices, but I figured the readers who don’t know what it means could gloss over that part.)

For the more mathematically inclined, I suppose it also helps to know that |x> is a vector, so you can multiply by scalars and sum them together in the usual way. Maybe some people would write it with a boldface x or some other notation, but what does it really matter?