Heisenberg Uncertainty Principle not so "uncertain"?

Let me start by saying IANAPhysicist and am not here to suggest overturning one of the hallmarks of physics. Just confused and hope someone can explain.

It is my understanding that one of the features of quantum mechanics is being unable to tell where a particle is and its velocity with great precision (the better you know one detail the less well you know the other detail). Further, when assessing where a particle is some probability analysis is done assigning the particle certain chances for being at a given point in space. So, from what I have read a particle travelling from Point-A to Point-B can take any of a number of routes and, in a manner of speaking, in fact takes all routes (as illustrated by the double-slit experiment).

Here’s the part I am confused on. This may be Newtonian thinking but I thought it still applied in the quantum world. It has been my impression that something like, say, a photon will take the shortest route possible when travelling from Point-A to Point-B. Indeed one of the ways General Relativity was first confirmed was by watching light bend around a large mass (the mass curved space itself making the “shortest” route through space a curved line rather than a straight line).

So, in an experiment where I shoot a photon at a target can the photon really take any possible path? If I time the whole thing very carefully wouldn’t I see the photon arriving at the target at a predictable time (the time it takes light to travel the shortest path from source to target)? If I do this then don’t I now know where the photon was on its travels? I know it is running at light speed and that is not variable. I know the distance it travelled and presumably there is only one shortest path so with a bit of math I should be able to know its velocity and position at any point during its travels with precision.

If I use something that travels at less than light speed can’t I determine the particle’s speed precisely at the outset and then by noting its arrival time shouldn’t the shortest route from A --> B again be the result? If not and it takes longer than expected at the least haven’t I narrowed down the possible routes it might have taken from all possibilities to the far fewer routes in which that distance could be traversed in X-time?

My understanding is that you cannot say anything about where the quantum particle is, or what it is doing, between measurements. Your example sounds similar to the EPR “paradox” – have you read about that debate?

There is also a problem with interpreting the sum over histories literally; just because the maths works doesn’t mean it’s a model of reality (whatever that is).

You can certainly restate this using electrons or something to make it seem less mysterious, but maybe stating it in terms of photons really does a better job, since electrons have a wave nature as well.

Even in classical physics, light travels “along other paths” in the sense that light really is a wave, and its propagation is described by the Huygens-Fresnel principle, which states that every point on the expanding wavefront itself acts as a source for a new wavefront. If you add up the combined waves from all the points on the wavefront, you get the wavefront at a later time.
If this were not the case, then you wouldn’t see interference effects from light passing through slits. Forget about that double-slit experiment; I think it confuses people. Light passing through a single slit will show interference effects characteristic of the slit. It’ll do this even if you slow the rate of photons down so that you’re sure only one passes through the slit at a time. The photon interferes with itself in passing through the slit. This is hard to understand if you view the photon as a tiny hard baseball passing through the vast gulf of a cut-out slit, but becomes easy if you imagine the light as a wave that fills the slit.
Okay. You can buy that – light is a wave and a particle, somehow.

But the same thing happens with electrons. You can do the same experiment with electrons, and get interference patterns.

So the electron has to be everywhere in the slit. But it can’t, if it’s a hard little particle. So the mental picture is wrong, and we can’t really know where it is by this sort of inference.

It is more than just being everywhere within the slit. I was re-reading Brian Greene’s The Elegant Universe and when he was talking about the double-slit experiment he mentioned that one possible path for the photon (or electron) could be a path to the Andromeda Galaxy and back. While this may have been some creative license I think his point was that ALL paths are far, far more than just the bit where the slit is. Perhaps the probability for a path to Andromeda and back is quite low but it is there.

So what I am wondering in my post is: if you time all this very carefully, haven’t you then put some very real boundaries on what paths your particle could have taken? Narrowing it down from some infinite number to something more manageable? If my particle takes 5 million years to leave my photon source and arrive at my detector then I guess a trip to Andromeda is in the cards. My guess, however, is that if I time it I will find it takes the shortest route to the detector every time. Do scientists ever do these experiments and find that the photon seemed to take several minutes to cross the room (suggesting it took some long way around)?

Also, if we use photons then don’t we know its velocity (always light speed)? With that bit and knowing the distance from source to target aren’t we able to start putting some real boundaries on where the particle could have been between the two by knowing its travel time?

I haven’t read Greene’s book, but clearly at some point well before this the proposition becomes absurd. A photon going to the Andromeda Galaxy and back won’t return in your lifetime, and can’t possibly interfere with anything if you set up the experiment and view it right away. It’s not that the possibility is vanishingly small, but that it’s not possible, given a finite speed of light.

What about much shorter paths, like backwards to the back of the room before it comes forward again and goes through the slit? I see all sorts of problems with that, too, even though it’s not impossible because of lightspeed. But good luck trying to see any effect from it. Even if you have a mirror back there and deliberately reflect the light from a light bulb with ordinary coherence properties, it won’t interfere with the direct photon because you’ve exceeded the coherence length of the light. For any practical purpose, you’re only going to see effects from light rays that were initially travelling in almost the same direction from the same point.

Old Star Trek joke:

Q: How do the transporter’s Heisenberg compensators work?
A: Quite well, thank you!

I thought the photon interfered with itself? So even our Andromeda-bound photon would still achieve interference (and yes, the example was towards the absurd, but call it artistic license to make a point).

You can’t even interfere with yourself if the difference between the two paths exceeds the coherence length.

Sorry but you lost me on this bit.

How can a single photon decohere? I thought that even if they released one photon per hour, an interference pattern still develops with the double-slit experiment.

Unless you mean to say that the simultaneity of the photon passing through both slits means that one part of it travelled to Andromeda (or the back of the room, if you prefer) while the other part took the short route. I just assumed that the going-through-both-slits part had to happen when the photon finally did arrive at the slits… even if it took some long way around.

With a single slit, the light (or electrons, or whatever) will simply show a diffraction pattern. If the width of the slit is on the order of half the wavelength of the photon, you essentially get a normal distribution. There’s nothing all that interesting about that; it’s all very classical, if you accept the fact that tiny perturbations aren’t going to allow each photon to take exactly the same path through the slit. However, with two slits positioned at the right increment relatively close to each other, instead of just getting the superposition of two distributions, giving you a twin hump, you’ll get a diffraction pattern similar to two expanding radial waves on the surface of a pond meeting, with alternating heavier and lighter stripes on the target where the wavefronts of the photons are additive and where they subtract out. This clearly indicates that photons have a wave-like nature.
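If you want to see those two patterns side by side, here is a minimal numeric sketch of the textbook Fraunhofer result. The wavelength, slit width, and spacing are illustrative numbers I picked, not anything from the posts above:

```python
import numpy as np

# Toy double-slit intensity pattern (scalar Fraunhofer approximation).
# All parameter values below are illustrative choices.
wavelength = 500e-9   # m, green light
slit_width = 50e-6    # m
spacing = 250e-6      # m, centre-to-centre slit separation

theta = np.linspace(-0.02, 0.02, 2001)   # viewing angle, radians

# Single-slit diffraction envelope: (sin(beta)/beta)^2 -- the lone "hump"
beta = np.pi * slit_width * np.sin(theta) / wavelength
envelope = np.sinc(beta / np.pi) ** 2    # np.sinc(x) = sin(pi*x)/(pi*x)

# Two-slit interference term: cos^2 fringes
delta = np.pi * spacing * np.sin(theta) / wavelength
fringes = np.cos(delta) ** 2

# With one slit you would see only `envelope`; with two slits the
# cos^2 fringes appear underneath that same envelope.
intensity = envelope * fringes
```

The `intensity` array peaks at the centre and shows the alternating bright and dark stripes modulated by the broader single-slit hump.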

However, we know from the photoelectric effect (among others) that light also has a quantum nature, where a particular frequency of photon has a certain energy, and they don’t just add together. (Specifically with regard to the photoelectric effect, it was found that photons of a given frequency caused the surface of a metal foil to emit electrons; a larger quantity of photons of this frequency caused a linear increase in the number of electrons emitted; however, twice the number of photons at half the frequency caused no electrons to be emitted, indicating that the energy in photons came in quanta, or little packets.) This is where we get the alleged “dual nature” of light (and other fundamental particles), though it’s not that they actually flip back and forth between one and another but rather that they exhibit properties of both in a way that we simply can’t analogize to our macroworld experience. As a result, we treat the situation mathematically as being sometimes wave-like and sometimes particle-like as fits what we’re trying to do.
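The threshold behavior in that parenthetical can be sketched in a few lines. The work function value here is my own assumption (roughly that of sodium), chosen only to illustrate the all-or-nothing character of the effect:

```python
# Minimal sketch of the photoelectric threshold described above.
h = 6.62607015e-34        # J*s, Planck constant
eV = 1.602176634e-19      # J per electron-volt

work_function = 2.3 * eV  # assumed value, roughly sodium

def electrons_emitted(frequency_hz, n_photons):
    # One photon frees at most one electron; photon energies do not
    # pool together, so below-threshold light frees none at all.
    return n_photons if h * frequency_hz > work_function else 0

print(electrons_emitted(6.0e14, 100))   # green light, ~2.48 eV per photon: 100
print(electrons_emitted(3.0e14, 200))   # infrared, ~1.24 eV per photon: 0
```

Doubling the photon count at the higher frequency doubles the electrons (the linear increase), while any number of below-threshold photons yields zero.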

Heisenberg himself referred to the characteristic of the now-eponymous principle as “indeterminacy” (and discouraged the application of his name to it in titular fashion), which is probably a better, or at least less confusing, description. In general terms, Heisenberg’s genius was in throwing away all assumptions about anything like classical Newtonian behavior on the scale of subatomic particles and assigning them properties that were “quantized”; that is, occurring in discrete steps. This got rid of the whole problem with bizarre, three-dimensional atomic orbits which required constant radiation of excess energy in order to be stable; however, it was a seemingly artificial hack that worked for no good reason whatsoever. But the hack has continued to work, and work very well, to the point of being the essential basis for a vast subfield of natural science that has been very, very successful in predicting a wide range of cool behavior.

The most common expression of the indeterminacy principle is that the uncertainty in position times the uncertainty in momentum has to be greater than or equal to the reduced Planck constant (ħ) divided by 2. This gives a lower bound on how accurately you can determine these qualities of a particle, and more interestingly, a limit on how definitely you can identify the particle (which is otherwise indistinguishable from other particles of the same type) in the future. There are other formulations, all also involving two characteristics in an inequality, that are equivalent to this. Again, this all seems very arbitrary, but it works, which is more than one can say for Reaganomics or British automobiles.
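To put a number on that inequality, here is a quick back-of-the-envelope calculation for an electron confined to roughly an atom’s width. The confinement size is my own illustrative choice:

```python
# Plugging numbers into dx * dp >= hbar / 2 for an electron confined
# to about an atom's diameter. The 1-angstrom figure is illustrative.
hbar = 1.054571817e-34    # J*s, reduced Planck constant
m_e = 9.1093837015e-31    # kg, electron mass

dx = 1e-10                 # m, position uncertainty (~1 angstrom)
dp_min = hbar / (2 * dx)   # kg*m/s, minimum momentum uncertainty
dv_min = dp_min / m_e      # m/s, corresponding velocity uncertainty

print(f"dp >= {dp_min:.2e} kg*m/s")
print(f"dv >= {dv_min:.2e} m/s")   # several hundred km/s -- not a small effect
```

Pinning the electron down to atomic dimensions forces a velocity uncertainty of hundreds of kilometres per second, which is why the principle matters on that scale and is invisible for baseballs.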

The net result is that, yes, an unrestricted photon not only can, but does travel all possible paths to get to a particular endpoint, and the “speed” of the components of an individual particle varies based on the paths they follow. Even more strangely, it communicates “with itself” along the way; for instance, if you emit a photon into a waveguide (such that it is reflected and can’t leave the waveguide until it gets to the target) and then split it between two waveguides that take widely separated paths, it’ll communicate “with itself” such that detection of the photon along one path will prevent the creation of an interference pattern on the target. The only accepted way to cope with this is to adopt the notion that there are either some kind of “hidden variables” by which the photon “agrees with itself” about what’s going to happen before it splits up, or that there are “non-local” (i.e. instantaneous) connections between the various bits of photon that are in constant communication despite being physically separated. Most of the guys working in QM grew to accept this as a matter of course without getting too worked up about the apparent paradox of the whole thing–Niels Bohr famously made the pronouncement “It is wrong to think that the task of physics is to find out how nature is. Physics concerns what we can say about nature”–but it did and still does make for some great late-night dorm room bull sessions about the Whole General Wiggy Mishmash Of It All, and What It Means In Terms Of The Nature Of Reality, and How We Can Use This To Turn A Profit. (The latter seems to be the most difficult problem of all in quantum physics, and despite the publication of numerous popular science works by the likes of Richard Feynman, Brian Greene, and Lisa Randall, most people working in fields relating to quantum mechanics live in the sort of quiet, near-ruinous desperation that leads to great starving poetry, unfashionable attire, and university tenure.)

Anyway, adopting the principle of indeterminacy, along with Bohr’s model of the atom (or rather, the valence shell theory derived from it) and his notion of complementarity (the idea that the wave and particle natures of quantum entities exist simultaneously but are expressed in complementary proportion to one another depending on which one you’re trying to measure at the time and how much you squint), allows for describing various diffraction and interference phenomena in quantum terms (the field of quantum electrodynamics, or QED), but requires that you accept that, on the quantum level, photons can actually travel a path from Point A to Point B that requires them to move faster than classical electrodynamic theory or Special Relativity allows.

Are we worried about this? Not particularly; the average velocity of the particle is still c (or for massive particles, less than c), and quantum mechanicists are very relaxed about the fact that this is but one of a whole range of phenomena that are totally and completely impossible on everyday scales but which nonetheless work out with mathematical treatments and the validating experiments that are performed on the behavior of fundamental particles and their first-level constructs, like nucleons such as the proton and neutron. By the time you get to the point of dealing with collections of particles, whether they be a group of millions of photons or a collection of atoms jumbled together via ionic or covalent bonds, this is no longer an issue; the completely ridiculous behavior of the extremes is lost via decoherence in the average, and all the odd bits wash out so you’re left with only behavior that is agreeably in line with Newtonian and Einsteinian mechanics.

In short, going about trying to apply things like “logic” and “reason” and “Newtonian mechanics” to quantum entities is a fool’s errand that will only give you migraines and make you consult your oculist more frequently than would be otherwise necessary. Go out and enjoy the sunlight without thinking about the fact that it’s composed of individual particles that have filled the entire universe with probability waveforms existing in an entirely abstract and unmanifest state all the way to the point at which they contacted your retina and collapsed just so it looks like they came straight from the Sun without a bunch of circuitous touristing. And never mind the fact that everything you believe to be solid and immutable, including the materials that make up your own body, is actually just a collection of interfering waveforms that could spontaneously and discourteously disappear at any moment, leaving you–perhaps literally–without a leg to stand on. Don’t worry about any of that at all. We can say almost definitely–with only a very, very slight amount of uncertainty–that it won’t happen, or at least, not in the next ten minutes.

Sweet dreams.

Stranger

When they say “all paths”, they really do mean “all”, including the ones which loop around the Andromeda Galaxy. These individual paths are not constrained by the speed of light. However, to get anything actually observable, you have to add up the results of all of those possible paths, and when you do that, the result does always respect the speed of light. This only works if you actually do include all of the paths: That path out past M31 cancels out some other path very close to it, so if you neglect one of those paths, the other will remain uncancelled, and you’ll get the wrong answer.

It should be admitted that, in most circumstances, it is very difficult to include all of the paths in your calculation. Aside from a few toy cases like the free particle and the idealized double-slit experiment, this method isn’t actually used much in doing actual calculations, since there are other methods which are exactly equivalent but simpler to use. Still, it’s perfectly valid.
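For the curious, here is a toy numeric version of that cancellation. The geometry and all the numbers are my own illustrative choices, not a real QED calculation: each path from A to B detours sideways by a height h at the midpoint and carries a phase proportional to its length; paths near the straight line have nearly stationary phase and add up, while a band of wild detours oscillates in phase and nearly cancels.

```python
import numpy as np

# Toy "sum over paths" for a photon going from A to B.
# Each path is two straight legs via a midpoint displaced sideways by h,
# with amplitude exp(2*pi*i * L(h) / wavelength). Illustrative numbers only.
wavelength = 1.0
D = 1000.0                 # straight-line distance A -> B, in wavelengths

def path_length(h):
    return 2.0 * np.sqrt((D / 2.0) ** 2 + h ** 2)

h = np.linspace(0.0, 200.0, 200001)
dh = h[1] - h[0]
amp = np.exp(2j * np.pi * path_length(h) / wavelength)

# Contribution of paths near the straight line vs. a band of big detours
near = abs(np.sum(amp[h <= 20.0])) * dh
far = abs(np.sum(amp[h >= 180.0])) * dh
print(near, far)   # the near-line band dominates by a wide margin
```

The distant band is not forbidden; its contributions simply arrive with rapidly varying phases and sum to almost nothing, which is the mechanism behind "neglect one path and its partner remains uncancelled."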

I could use a bit of clarification on “all paths” if those who know will humor me.

If I set up an experiment with my photon gun pointing at a detector 100 meters away, and I set up another detector 25 meters ahead and 25 meters to the side, will the closer detector that is not in a straight line to the photon source be the one to detect the photon first? Not sure I am explaining this well, but if the photon takes all possible paths to the target straight ahead, then one such path should presumably pass through the detector off to the side. If that detector gets the photon first, then I would not expect to see the photon arrive at the detector further away.

My instinct here is that the detector to the side will not register a hit and I should expect the photon to go straight ahead to the more distant detector but clearly gut instinct is one of the first things to go out the window with all this. Yet I cannot see how the photon could take all possible paths and not be detected by the nearer detector.

I have a question. I’ve heard this explanation a million times, but I’ve never heard the actual mechanics of how you let loose and then measure one single photon without letting loose any additional photons. Is it really possible to send only one photon through a slit, or is that a metaphorical dumbing-down of the actual procedure in order to explain it to laymen? If it’s possible, how is it done?

I appreciate that scientists know they have not reached the end of the road on explaining all this, but are they really OK with all this apparent weirdness and these seeming paradoxes?

That scientists can produce end results that work fantastically well does not, I think, entitle them to ignore serious “WTF just happened here?” moments.

Once upon a time we had a geocentric view of the universe. While cumbersome to work with, if one tried hard enough predictions could be made using it, and it worked OK. Maybe some problems here and there, but usable. Then Copernicus and Galileo made some nifty observations and poof… a far more useful view of how the planets move popped out.

So while a decent answer may be massaged out of this or that equation, do scientists really feel that this is indeed how it all works? Just pay no attention to the man behind the curtain and accept it this way? Or do they feel more that, while they can do useful work with what they have, they are in the Dark Ages, presuming the geocentric model and waiting for someone to invent the telescope?

In effect, they keep piling in neutral density filters until the photon flux is so low that they can be reasonably sure that they’re getting single photons going through the system.

It occurred to me over lunchtime that there’s a deep and fundamental issue here (not quite as deep as photons travelling to Andromeda, but still pretty deep) – “coherence length” only makes sense if you’re looking at a light source with a spectral spread. A single photon can’t have a spread, so it ought to have perfect coherence. But there ain’t any such thing – any real light source, at the very least, gets turned on and off, and a perfect sine wave has to have been on forever and never get turned off, keeping the same amplitude all that time. No real light source has a delta-function spectrum. But we’re talking about a single photon here – surely if it ‘interferes with itself’ it must have the very same single wavelength. But whenever I have heard of these single-photon interference patterns, no one has made such extraordinary claims for the result. If they had, I think I’d have heard them.
That said, I only know of these sorts of experiments second-hand. It’s probably time to dig up the original papers and see what they actually did say.

Most scientists pay little attention to trying to explain the universe in their everyday lives. They run experiments, or work on knotty questions of theory. Most of it is fairly small scale, as you can tell by reading the titles in any science journal.

This is true throughout the sciences. Stephen Jay Gould made his reputation classifying land snails for years before going on to pontificate about evolution. All of this basic knowledge is a necessary underpinning for making any grand statements. And it all can be done without caring about the man behind the curtain.

Only a small minority of working scientists attempt to do grand theory. Those that do are as worried about the nature of reality as you are. That’s why unified theories and theories of everything and the structures of string theory or loop quantum gravity exist. They are attempts to provide deeper and more basic understandings of reality that will answer many of these questions.

Answering the questions will not change the behavior, however. Quantum theory works. Particles really do behave in these “counterintuitive” ways. Any larger theory will have quantum mechanics “fall out” of it as a consequence.

And there is no necessary reason to think that we will be able to fully understand the answer if and when it comes. Why should we - tiny, short-lived, macroscopic creatures - be able to understand the basic subatomic nature of time and space? That may be hubris even to assume the answer is possible.

Well, depending on how broadly you define “how it all works”. To me the Heisenberg uncertainty principle works equally well outside of QM, in plain old Newtonian mechanics, and I can’t see how you would meaningfully redefine things to make it go away. We define momentum in terms of velocity and mass. We define velocity as change in position over time. The very idea of momentum therefore depends on some duration of time and change in position; as such, momentum at any particular point in time and space is simply an abstraction. Momentum is a property of something during a period of time or within a span of space. Position is a property of something at a point in time. Unless you find a way to define a thing’s velocity entirely independent of the passage of time, you’re stuck with the fact that either you’re moving and you only have a vague idea where you are, or you know where you are but you have no clue where you’re going. Modern physics just gives us the Planck constant to put a face on this conceptual problem, but the logical problem exists without it.

Just divide the infinities out, normalize the result, and everything will be fine. Just fine.

The problem (and what is often not made very clear in nontechnical explanations of interference experiments) is that once you detect the photon (or electron, or whatever) you interact with it, and thus from that point it can only have travelled on that path. This will result in a “collapse of the wavefunction” (or whatever rationale you prefer), and the interference pattern produced in the case of the two-slit experiment will disappear. This isn’t some foofy New Age-y pseudoscientific babble; by observing the particle, you interact with it and affect the result. Note that there is no indication that consciousness is required to “make this decision,” although clearly for us to have any awareness of it we have to inspect the system in some way, either by directly opening up the box and looking at the cat, or inquiring to Wigner’s friend about the state of things. However, before we look, all states/paths exist simultaneously.

Well, you can’t observe a single photon without actually in some way interfacing with it, i.e. absorption or stimulated emission, and once you do this, you both lose track of the original photon and impact the result. More often, the double slit experiment is performed with electrons because they’re a bit easier to cope with, though it’s been performed with larger particles like protons, neutrons, and even large uniform carbon molecules on the ragged edge of decoherence (i.e. behaving like a classical object). You’re correct, though; you can’t measure a single photon without somehow absorbing it or producing another photon. We can’t just “look” at a passing photon for the obvious reason that it doesn’t radiate anything.

Let’s say that the more you study this, the more you become accustomed–or perhaps resigned–to not really having any clue about the underlying mechanism. This isn’t to say that many haven’t tried; Einstein spent many years trying to find some flaw in reasoning that would resolve the apparent paradox and explain the phenomena of QM in a rational system. The harder he tried, though, the worse it got; his EPR paradox was an attack on the Copenhagen interpretation of QM, but ended up validating the notion of non-locality, placing it at serious odds with Special and General Relativity. The late Northern Irish physicist John Stewart Bell extended this to his famous inequality, which led him to support the notion of nonlocal hidden variables (i.e. agreements or connections between particles or systems that are not connected “locally”, i.e. within the framework of SR) and an enthusiasm for the Bohm interpretation of QM.

Einstein himself believed that there are “hidden variables”, which was the impetus for his famous, oft-miscontextualized quote about God not playing dice with the Universe, but he wouldn’t accept that these variables were or allowed nonlocal communication. Bell demonstrated, however, that hidden variable theories that depend upon local realism don’t work in the context of the interaction of nonlocal events. You can imagine this in terms of two groups each building half of an engine, each locked away in a different building with no lines of communication. If they’re not allowed to communicate (i.e. they’re “non-local”) then the only way the pieces will match up is if they’re both working to the same set of prints (the “hidden variables”); however, you can’t know enough about what differences each team builds into their parts during manufacture to assure that the resultant engine fits together. The only explanation for how it can possibly fit together (and it always does) is that the two teams must have been communicating somehow, despite the fact that we’ve stipulated that no conventional means of communication are allowed. How did they do it? Telepathy? Remote viewing? Messages passed via the cleaning crew? A VPN tunnel? We don’t know, and we can’t reason it out in any terms that make sense in our everyday experience, indicating that there is something seriously incomplete about our understanding of the underlying mechanics of QM.

The problem with trying to figure this out, though, is that all of our tools and senses for interpreting these results are also governed not by classical laws (which is often assumed in gedankenexperiments like the Schrödinger’s cat experiment) but themselves by quantum phenomena, which ultimately limits our ability to say anything definitive about the results. This is widely considered a fundamentally insoluble problem, or at least unprofitable from the perspective of producing a viable thesis, but it’s also deliciously metaphysical, and people who declare a problem irresolvable generally have a way of dining on Corvus brachyrhynchos, so I wouldn’t go so far as to say that we can’t know; merely that we don’t know.

Stranger

Yeah, that’s what I thought. I always figured that they’re actually looking at a collective result, and have a good reason for assuming that the particles aren’t in fact interfering with each other at the same time, even though the explanation that they’re sending a “single particle” through the slit isn’t really empirically observable. So is the idea that only a single particle goes through the slit at any one time an inference based on collective data? I’ve never really understood this.

In all fairness, this is the guts of science; Big Problems are glorious things to write popular science books about, but real advances usually mean working on “little” problems which occasionally segue together with other “little” problems to make big, integrated theories. Grand theories usually collapse under their own weight and inconsistency unless you’ve started from the foundation and built upwards.

Er, no. You’re conflating the notion of relativity–that there are no privileged reference frames–with fundamental indeterminacy. These are two very different concepts.

The effects of the indeterminacy principle become negligible on large scales (roughly from the scale of molecules upward) because the quantum uncertainty is effectively much smaller than the size of the object; with the exception of a few very rare, exotic states, matter on the scale that we regard as a solid or fluid continuum acts in a manner that is in line with classical mechanics to the limit of our ability to measure, or indeed, even perceive it. Now, it’s true that even classical, determinist systems can display behavior that is fundamentally unpredictable–so-called chaotic or highly perturbative systems–but this doesn’t require any application of quantum mechanics to explain, and it doesn’t obey Heisenberg’s famous inequality, but rather the complex mathematical theory derived from the work of Poincaré and Lorenz on dynamic nonlinear systems.

Stranger