

#101





#102




I've never understood how radioactive decay can be said to be a quantum superposition; I would have thought that it met the standard (whatever that is) of having undergone wave function collapse. Now I can understand an unstable nucleus being in a superposition of different energy states, but if sending a gamma-ray photon off into the universe at the speed of light isn't a wave function collapse, I don't know what is.

#103





#104




In fact, the linear nature of quantum mechanics means that, given any two (pure) states, any superposition of those two states is also a possible pure state.
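To make that concrete, here is a minimal numerical sketch (my own toy example, with arbitrary basis labels and coefficients): any linear combination of two orthonormal states, once normalized, is again a legitimate pure state whose Born-rule probabilities sum to 1.

```python
import numpy as np

# Two orthonormal basis states, labelled e.g. "undecayed" and "decayed".
psi_undecayed = np.array([1.0, 0.0], dtype=complex)
psi_decayed = np.array([0.0, 1.0], dtype=complex)

# An arbitrary linear combination (coefficients chosen purely for illustration).
a, b = 0.6, 0.8j
psi = a * psi_undecayed + b * psi_decayed

# Normalize; the result is again a valid pure state.
psi /= np.linalg.norm(psi)

# Born-rule probabilities for the two outcomes sum to 1.
probs = np.abs(psi) ** 2
print(probs)        # [0.36 0.64]
print(probs.sum())  # 1.0
```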



#105




Quote:
P.S. I'd never heard that interference could be shown for decay events [citation requested]. 
#106




The fact that we don't see evidence of the superposition of macroscopic states of macroscopic objects, even if we try to amplify the microscopic superposition a la Schrodinger's Cat, was initially a mystery to quantum theorists. These days it can be explained by decoherence, though what happens during decoherence is more subtle than the outright destruction of superposition. It is worth noting that at the moment there are still some very major open questions concerning quantum measurement.

The most basic model of radioactive decay is the semiclassical model of alpha decay. In this model an alpha particle starts in a potential well which models the nucleus; over time its wavefunction "spreads" into the region outside the nucleus. Before a measurement is made there is a probability that the particle will be found inside the nucleus and a probability that it will be found outside, and so in this state it can be said to be in a superposition of decayed and undecayed states. The semiclassical QM model gives predictions for alpha decay which, whilst more ballpark than super accurate, are surprisingly good for such a simple model. There are of course more complex quantum models which give more accurate predictions.

An alternative, ad hoc model of radioactive decay might be that the atom randomly decays at a definite time, with the probabilities that are observed. But in that case, why does the quantum model give us the power to predict those probabilities where the ad hoc model does not? The ad hoc model might also struggle to explain the quantum Zeno effect, which has been demonstrated experimentally, whereby continually observing a nucleus can prevent the wavefunction from "spreading" into the outside region and hence prevent nuclear decay. 
If you don't like these implications of the standard textbook interpretation of QM and you need an "out", then in an interpretation like Bohmian mechanics particles always have definite positions, and hence nuclear decay takes place at a definite time. 
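To make the "spreading outside the well" picture slightly more concrete, here is a minimal sketch (my own toy example, not the full semiclassical alpha-decay calculation) using the standard textbook transmission coefficient for a 1D rectangular barrier. The numbers are arbitrary and only illustrate that a particle with energy below the barrier height still has a strictly nonzero probability of being found on the far side.

```python
import numpy as np

HBAR = 1.054571817e-34  # reduced Planck constant, J*s
M = 6.64e-27            # approximate alpha-particle mass, kg
EV = 1.602176634e-19    # joules per electronvolt

def transmission(E, V0, a):
    """Exact transmission coefficient for a 1D rectangular barrier of
    height V0 and width a, particle energy E < V0:
        T = 1 / (1 + V0^2 sinh^2(kappa a) / (4 E (V0 - E)))
    where kappa = sqrt(2 m (V0 - E)) / hbar."""
    kappa = np.sqrt(2 * M * (V0 - E)) / HBAR
    return 1.0 / (1.0 + (V0**2 * np.sinh(kappa * a)**2) / (4 * E * (V0 - E)))

# Illustrative toy numbers only: a 10 MeV barrier, 2 fm wide,
# approached by a 5 MeV alpha particle.
E = 5e6 * EV
V0 = 10e6 * EV
a = 2e-15
T = transmission(E, V0, a)
print(T)  # small but strictly positive
```

Note how sensitive T is to the barrier width: doubling a suppresses the transmission dramatically, which is (roughly) why alpha-decay half-lives span so many orders of magnitude.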
#107




I don't understand why, by that standard, you can't say that the uranium atoms in the Little Boy bomb are still in a superposition of having fissioned and not fissioned.

#108




I think the three main things to understand about Schrodinger's cat, in no particular order, are:

1) Decoherence. This tells us that interference effects become suppressed in a very, very short period of time when a quantum system couples to its environment. For an object like a cat, this coupling to the environment is for practical purposes impossible to prevent.

2) The quantum mechanical measurement problem. Despite QM having been around for almost a century, there is still no universally accepted answer as to what exactly constitutes a measurement, despite measurement seeming to feature very prominently in the theory.

3) Interpretation. There are multiple different interpretations of quantum mechanics, and when talking about the philosophical implications of a thought experiment like Schrodinger's cat, those implications will be heavily dependent on how you interpret QM. 
#109




(Now there's a thought: what happens in the double-slit experiment if the barrier is monatomically thin and there's a measurable chance of tunneling through it?) 


#110




ψ_{1} + ψ_{2} (I'm being very schematic here, so I won't worry about normalization and other details). Think dead cat + live cat, or right slit + left slit in a double-slit experiment, something like that. Now, in QM, detection probabilities are obtained by squaring the wave function (actually, since wave functions are generally complex, multiplying the wave function with its complex conjugate). Squaring the above yields |ψ_{1} + ψ_{2}|^{2} = |ψ_{1}|^{2} + |ψ_{2}|^{2} + 2Re{ψ_{1}ψ_{2}^{*}}, where 'Re' denotes taking the real part, and '*' denotes complex conjugation.

Note that this probability distribution differs from the one you would have obtained if you had simply added the probability distributions corresponding to the two terms in the superposition (that is, |ψ_{1}|^{2} and |ψ_{2}|^{2}) by the third term: this term describes the interference. In fact, the first two terms are what you would obtain if, for instance, in a double-slit experiment you keep track of 'which-path' information, that is, you measure which slit the photon traverses: the interference pattern vanishes, as the superposition is destroyed, and is replaced by the straightforward sum of the probability distributions obtained from the individual terms in the superposition. The third term, however, is generic whenever you have a superposed state; and, since it may be negative, it can either reinforce or suppress detection probabilities, as compared to the case in which there is no superposition, which is just what interference is.

Note that this doesn't imply such an experiment is practical; indeed, nobody's ever going to measure the interference term between dead and alive states of a cat. But the possibility is there, in principle; if you want to get rid of it, you'll have to modify the structure of quantum mechanics (such as by introducing a spontaneous wavefunction collapse). 
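The algebra above is easy to check numerically. Here is a minimal sketch (the wavefunctions are arbitrary illustrative Gaussians of my own choosing, standing in for the two slits) verifying that |ψ_{1} + ψ_{2}|^{2} equals |ψ_{1}|^{2} + |ψ_{2}|^{2} plus the interference term, and that the interference term takes both signs:

```python
import numpy as np

# Two schematic "one slit each" wavefunctions on a 1D screen:
# Gaussian envelopes with opposite phase gradients (arbitrary numbers).
x = np.linspace(-10, 10, 1001)
psi1 = np.exp(-(x - 2) ** 2) * np.exp(1j * 3 * x)
psi2 = np.exp(-(x + 2) ** 2) * np.exp(-1j * 3 * x)

# Probability density of the superposition...
p_super = np.abs(psi1 + psi2) ** 2
# ...versus the naive sum of individual densities plus the interference term.
p_sum = np.abs(psi1) ** 2 + np.abs(psi2) ** 2
interference = 2 * np.real(psi1 * np.conj(psi2))

# The identity |psi1 + psi2|^2 = |psi1|^2 + |psi2|^2 + 2 Re(psi1 psi2*)
# holds pointwise on the screen:
print(np.allclose(p_super, p_sum + interference))  # True

# Unlike a probability density, the interference term takes both signs,
# producing fringes that reinforce or suppress the detection probability:
print(interference.min() < 0 < interference.max())  # True
```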
#111




To illustrate, let's say an experimenter makes a quantum measurement on a particle whose result is "A" or "B" with (just for the sake of symmetry) a 50-50 chance. The traditional interpretation says the wavefunction of the particle collapses into either a state corresponding to a result of "A" or into a state corresponding to a result of "B". The projection postulate which describes collapse seems a bit ad hoc, but a strong point in its favour is that it is consistent with what we observe.

However, we soon find that this leads to philosophically very knotty problems. It seems perfectly consistent with quantum mechanics that, instead of using the projection postulate, we can describe the measurement as the coupling of the particle to the measurement apparatus. But this description seems to be very different from the description given by collapse, as what we find is that the quantum system of particle+measurement apparatus is in a superposition of states corresponding to the measuring apparatus reading "A" and the measuring apparatus reading "B". Let's say our experimenter comes along and reads the result of the experiment; surely we should insert the projection postulate now? Again, though, it is perfectly consistent with basic quantum principles for the particle+measurement apparatus+experimenter system to be described as a superposition of the experimenter reading "A" and the experimenter reading "B". Let's push it even further and say the experimenter tells his friend Eugene the result of the experiment; perhaps now is the right time to use the projection postulate? Yet again it is still perfectly consistent for the particle+measuring apparatus+experimenter+Eugene system to be in a superposition of states of Eugene having been told "A" and Eugene having been told "B". And so on ad infinitum.

Decoherence offers a bit of an insight, as, realistically, the measuring apparatus and the experimenter and Eugene are already coupled to the environment. So after the particle couples to the environment, if we consider subsystems of the overall system such as the ones mentioned in the previous paragraph, we will not be able to detect interference. However, this is absolutely no good for resolving at which level collapse occurs, as the overall system can still be described as being in a superposition of states corresponding to the two different outcomes. Decoherence merely tells us that there will be no detectable way to determine at which stage we should insert the projection postulate.

The Copenhagen interpretation (at least as far as people can even agree on what the Copenhagen interpretation is) is pretty much the interpretation that the philosophizing should be left to the beardy weirdos in the building across the campus; we should just invoke the projection postulate at whatever point is most convenient for us to do so and be happy that our predictions match the results. However, this isn't a satisfactory resolution for many people, and there are alternative interpretations.

A popular alternative is the many-worlds interpretation, which simply does away with the projection postulate and says that the overall system is in a superposition of states. This interpretation has been extended to the idea that each alternative outcome constitutes a separate "world" existing in parallel to the other "worlds". However, this brings a different set of problems, such as: how do we define individual "worlds" from the wavefunction? Why don't we experience the overall superposition if that is the reality? How can you even define the probabilities that are the predictions of QM if all the alternatives actually exist?

Another alternative is Bohmian mechanics, which also does away with the projection postulate and is entirely deterministic. In Bohmian mechanics particles are similar to classical particles in that they always have definite positions, but they exhibit the weird quantum behaviour we see due to the influence of something called the quantum potential. This interpretation still has plenty of philosophical baggage that leads many to reject it. In particular, it involves seemingly causality-busting superluminal influences on particles; it has hidden variables which cannot be precisely determined and which can be argued, when Occam's razor is applied, to be unnecessary to quantum mechanics; and the resemblance to classical physics is superficial: it has plenty of its own weirdness.

Overall, each interpretation seems to have pros and cons, but the reason there are still so many interpretations is that there isn't one (at least yet) where the pros are better than the other interpretations' pros and the cons less bad than the other interpretations' cons, at least as far as most people agree.

You could in theory describe a double-slit experiment where the barrier with the two slits in it has some significant transmission probability. However, I think actually finding a solution for such a setup would be hideous and unlikely to be particularly instructive.

Last edited by Asympotically fat; 05-15-2017 at 06:01 PM. 
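A tiny numerical sketch of the point about decoherence and subsystems (my own toy model, not something from the thread): for a single qubit in an equal superposition of "A" and "B", a dephasing channel wipes out the off-diagonal (interference) terms of the density matrix while leaving the outcome probabilities on the diagonal untouched. Locally the state then looks like a classical 50-50 mixture, even though in a larger model the global state could still be a superposition.

```python
import numpy as np

# Equal superposition (|A> + |B>)/sqrt(2), written as a density matrix.
psi = np.array([1.0, 1.0], dtype=complex) / np.sqrt(2)
rho = np.outer(psi, psi.conj())
print(rho.real)
# The off-diagonal 0.5 entries are the interference terms.

def dephase(rho, gamma_t):
    """Toy dephasing channel: off-diagonal elements decay by exp(-gamma*t),
    modelling coupling to an unmonitored environment."""
    decay = np.exp(-gamma_t)
    out = rho.copy()
    out[0, 1] *= decay
    out[1, 0] *= decay
    return out

# Strong coupling / long time: interference terms are suppressed to ~0,
# but the diagonal probabilities (what we'd actually observe) are unchanged.
rho_after = dephase(rho, gamma_t=50.0)
print(np.round(rho_after.real, 6))
```

This is exactly the sense in which decoherence "merely tells us" interference becomes undetectable: the 50-50 diagonal is the same before and after, so no measurement on this subsystem distinguishes the dephased superposition from a collapsed mixture.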

