Uncertainty principle a model of reality - how did that happen?

I was reading about Stephen Hawking’s idea that there are two kinds of time - real and imaginary. The terms correspond to the kinds of numbers used in the respective calculations - real numbers and imaginary numbers. Real time is time viewed conventionally; imaginary time runs along a Cartesian axis orthogonal to real time, obtained by multiplying by sqrt(-1). Hawking ‘explained’ the problem of the beginning of time in A Brief History of Time by saying that in imaginary time there is no beginning, just as the planet doesn’t begin at the North Pole. For Hawking, this is a model of reality - time really never had a beginning. For others in the field, it is merely a mathematical method: for some problems, real time is the better way to calculate; for others, imaginary time is. For them, the fact that one can work with imaginary time does not imply that time actually has any such component, and it does not illuminate questions of the beginning of time.

It’s different with the Uncertainty Principle. Heisenberg said that ‘the more precisely the position of a particle is determined, the less precisely the momentum is known’. This was rapidly elevated to the status of ultimate truth, and it is at the heart of the mathematical toolkit of quantum mechanics. Discoveries in quantum wave mechanics revealed the so-called conjugate relations, of which position-momentum is one; work on gauge symmetry revealed others. They are all taken as actually real by most physicists, not merely as useful mathematical methods.

Virtual particles were postulated, and their existence defended, simply by arguing from the Uncertainty Principle. The existence of virtual particles is perhaps confirmed in measurements of the Casimir Effect. Hawking hypothesised black hole radiation (which is accepted as probably real) by arguing from the Uncertainty Principle. Quantum pair production is generally accepted as a real phenomenon, and it too can be predicted from the Uncertainty Principle.

From early in the development of quantum mechanics, the Uncertainty Principle has been accepted as a model of reality and used to predict real, physical effects.

The question: what is, and was, so compelling about the Uncertainty Principle that it was taken to be an actually existing feature of reality, rather than being used as a convenience in mathematics and regarded as incapable of describing reality itself?

Here is a history of the principle and its discovery.

The Uncertainty Principle (or, as Bohr referred to it, the Indeterminacy Principle) is a fundamental result of using probability distributions to describe the characteristics of a particle. Why do we use probability distributions to describe characteristics? Because the “mechanical” behavior of a single particle, or a system of intertwined particles, is described in terms of pairs of non-commuting operators (for instance, momentum and position) which cannot both be given sharp values at once.
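To make the non-commuting part concrete, here’s a rough numerical sketch of my own (not from any textbook, and the grid numbers are arbitrary): discretize the position operator and a momentum-like derivative operator as matrices, and check that XP and PX differ, with the difference acting like i times the identity on a smooth wavefunction.

```python
import numpy as np

# Grid: 200 points, spacing 0.1 (units with hbar = 1; my arbitrary choice).
n, dx = 200, 0.1
x = np.arange(n) * dx

# Position operator: multiplication by x, i.e. a diagonal matrix.
X = np.diag(x)

# Momentum operator: P = -i d/dx, discretized with central differences.
D = (np.diag(np.ones(n - 1), 1) - np.diag(np.ones(n - 1), -1)) / (2 * dx)
P = -1j * D

# The operators do not commute: XP != PX.
comm = X @ P - P @ X

# Acting on a smooth wavefunction (a Gaussian centered on the grid),
# the commutator behaves like i times the identity: [X, P] psi = i psi,
# which is the canonical commutation relation behind the UP.
psi = np.exp(-((x - 10.0) ** 2) / 8.0)
err = np.max(np.abs((comm @ psi - 1j * psi)[5:-5]))  # ignore boundary rows
print(err)  # small: the commutator acts as i on smooth functions
```

The discrete commutator only approximates i times the identity when applied to smooth functions, which is why the check uses a test wavefunction rather than comparing matrices entry by entry.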

Now, that answer was as clear as a Donald Rumsfeld press conference, so let me illustrate with a visual example. Think of a fluorescent spot painted on the sidewall of a car tire, so that its vertical velocity is zero at the top or bottom of the tire and greatest as it passes the level of the axle. If you take a picture at one moment in time, and then blank out everything except the spot, you know its position exactly; but without knowing anything else about the system (i.e. the velocity of the car, rotational velocity of the wheel, whatever) you have no idea how fast the spot is going. On the other hand, if you contrive to somehow measure the instantaneous velocity of the spot but know nothing about its position, the diameter of the wheel, et cetera, then you can’t get its true position. If you let the shutter of your camera stay open for a few thousand shakes, though, you’ll get a path, from which you can figure how far it travels and how fast it was going; although you can’t locate the spot at any given time t, you can gain absolute information about the behavior of the entire system. In fact, if you leave the shutter open long enough to go through a cycle or two, you can calculate the speed of the vehicle, the period of tire rotation, how out-of-balance the tires are on your buddy’s Nova, and so forth. (Trust me, it’s great fun at parties, although the chicks never seem to hang around for the big finale, especially after you start dumping all of the data into Matlab.)
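If you want to play with the long-exposure idea without waiting for a party, here’s a toy Python sketch (the wheel radius and car speed are numbers I made up): a single sample gives only a position, while averaging displacement over whole rotations recovers the car’s speed exactly.

```python
import numpy as np

# Hypothetical numbers: a 0.3 m radius wheel on a car doing 10 m/s.
R, v = 0.3, 10.0
omega = v / R  # wheel rotation rate (rad/s)

def spot(t):
    """Position of a spot on the tire sidewall: a cycloid-like path."""
    return np.array([v * t + R * np.sin(omega * t),
                     R - R * np.cos(omega * t)])

# One "snapshot": an exact position, but zero velocity information.
print(spot(0.05))

# A "long exposure" over exactly ten rotations: the sin term cancels
# over whole cycles, so the average displacement gives the car's speed.
t_end = 10 * 2 * np.pi / omega
t = np.linspace(0.0, t_end, 2001)
path = np.array([spot(ti) for ti in t])
speed_est = (path[-1, 0] - path[0, 0]) / (t[-1] - t[0])
print(speed_est)  # ~10.0, recovering v
```

Over an exposure that isn’t a whole number of rotations, the estimate is off by at most R·sin(…)/T, which shrinks as the exposure grows - the longer you watch the system, the better you know its bulk behavior.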

This isn’t a perfect example, because for all intents and purposes the spot on the tire is in fact a discrete point with definite characteristics determined by the tire and wheel; but if you imagine not having that information available, this should give you some notion of why we can’t calculate one quantity except as a range relative to how well we can measure the other. In fact, what you’ll get from your picture will be a waveform, which is interesting because wave-particle duality is a fundamental conclusion of indeterminacy; you end up with something that looks like a particle (in the way you observe it) but interacts with other particles in wave-like ways (interference, combination, superposition) while maintaining a discrete (quantum) set of characteristics.

Mind you, this isn’t just that the particles are moving or oscillating so rapidly that we can’t measure them; what it means, at least per the Copenhagen Interpretation of quantum mechanics, is that all particles exist as probability distributions in the underlying plenum (whatever that is) and don’t have discrete values, only characteristics described as statistical distributions with respect to the system at large. Is this a mathematical artifact, as you suggest? No; it exactly (but indeterminately) describes the behavior of an electron in its “orbit” around the nucleus of an atom, which would be classically untenable.

Many physicists, most notably Einstein, objected to the Copenhagen Interpretation (resulting in his famous, if oft misquoted and more frequently miscontextualized, statement, “I cannot believe that God would choose to play dice with the universe,” to which Bohr responded, “Einstein, don’t tell God what to do”). That the underlying nature of the universe is contrary to our classical expectations is just a result of how poorly we are able to see (and even more poorly comprehend) what actually happens, and how limited discrete cause and effect is in describing things at the subatomic level. Or, as Bohr often said, “Anyone who is not shocked by quantum theory has not understood a single word.”

So when someone asks you “What’s shakin?”, you can say, “I am, and so are you.” The chicks eat that stuff up; take it from me.

Stranger

It’s because the UP is a theorem and Wick rotation (“imaginary time”) is a trick.

There are very few books I’ve seen that really properly talk about the UP, but actually it’s one of the few subjects in Penrose’s The Road To Reality that I have absolutely no complaints with. He really does an excellent job of it. I don’t think I can really do it full justice here, so I’m pointing you to that book. The upshot is: it’s really a statement about math that can be proven.

Wick rotation, on the other hand, isn’t a statement of anything. It’s a trick to make calculation easier in some cases. Basically, if we think of coordinates (t,x,y,z) where t is time, distance in spacetime is measured by something like

x[sup]2[/sup] + y[sup]2[/sup] + z[sup]2[/sup] - t[sup]2[/sup]

where there’s that funny minus sign in front of the t[sup]2[/sup]. Other than that it looks a lot like the Pythagorean theorem, and it’s that minus sign that makes a lot of things behave oddly as compared to our intuition. So physicists try to get rid of it by defining w = it, where i[sup]2[/sup] is -1. Then we have

x[sup]2[/sup] + y[sup]2[/sup] + z[sup]2[/sup] + w[sup]2[/sup]

but now other things have to be tweaked in this “imaginary time” picture like wrapping “time” up into a circle. Physicists have found that sometimes a problem that’s very difficult to solve in (t,x,y,z) coordinates turns out to be much easier in (w,x,y,z). It’s like how changing variables in an integration problem from calc 1 (I don’t know if you ever took calculus, but here’s hoping) can make it easier to solve.
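A quick numeric sanity check of that substitution (plain Python, with values I picked arbitrarily): plugging t = -iw (the same as w = it) into the Minkowski form gives the all-plus Pythagorean form.

```python
def minkowski_interval(t, x, y, z):
    """Spacetime 'distance' with the funny minus sign on t**2."""
    return x**2 + y**2 + z**2 - t**2

def euclidean_interval(w, x, y, z):
    """Plain Pythagorean sum of squares in (w, x, y, z)."""
    return x**2 + y**2 + z**2 + w**2

# Wick rotation: w = i*t, i.e. t = -i*w. Feeding an imaginary t into
# the Minkowski form reproduces the Euclidean form with real w.
w, x, y, z = 2.0, 1.0, 3.0, 5.0
t = -1j * w
print(minkowski_interval(t, x, y, z))  # (39+0j): -t**2 became +w**2
print(euclidean_interval(w, x, y, z))  # 39.0: same number
```

Nothing physical happened there; it’s pure algebra, which is exactly the point - the rotation is a calculational change of variables, not a claim about what time is.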

The most satisfying explanation of it was given to me during a lecture by Raymond Hall of Fermilab:

It’s not simply a matter of us not being able to know both the position and momentum of, let’s say, an electron. The point is that if all this information about a particular electron orbiting a particular molecule were available to other molecules, then chemical interactions would depend on the state of each electron in each molecule. If the inability to know the states of these electrons were only our problem - if the electrons really had definite states that other molecules could respond to - then chemistry would be impossible. We would never know what would happen when two substances were brought together.

This is not the observed case, however. Chemistry is very predictable, because each molecule of a certain type is identical to every other molecule of that type, owing to the fact that the electron exists, for all outside entities, as a “probability cloud” around the nucleus of an atom, giving it a shape.

In other words, uncertainty at the sub-atomic level is the foundation for certainty at higher levels.

I like the idea of imaginary counterparts to the dimensions we comprehend: length, width, depth, and those little curled-up dimensions found in superstring/grand unification theories. Perhaps the universe we perceive is only the Real cross-section of a much more vast hyper-complex omniverse, and Time is the one dimension in which we directly make contact - per Mathocist’s previous post: x^2+y^2+z^2-t^2 and x^2+y^2+z^2+w^2, defining w = it, where i^2 = -1.

It’s also worth noting that the Uncertainty Principle is not exclusive to quantum mechanics. It, or something directly analogous to it, applies to all wave theories, including those known before the 20th century. If you’re describing the waves on the surface of the ocean, or waves in a guitar string, you’re going to run into something analogous to the UP. The fact that the UP shows up in quantum mechanics is just a reflection of the fact that quantum mechanics is a wave theory.
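For a classical illustration of that, here’s a small numpy sketch (my own, in arbitrary units): halve the duration of a Gaussian pulse and its Fourier spectrum doubles in width - the same tradeoff the UP expresses, with no quantum mechanics anywhere.

```python
import numpy as np

def spectral_width(sigma_t, n=4096, dt=0.01):
    """RMS frequency width of a Gaussian pulse of RMS duration ~sigma_t."""
    t = (np.arange(n) - n // 2) * dt
    pulse = np.exp(-t**2 / (2 * sigma_t**2))
    spec = np.abs(np.fft.fft(pulse))**2          # power spectrum
    freq = np.fft.fftfreq(n, dt)                 # matching frequency grid
    return np.sqrt(np.sum(freq**2 * spec) / np.sum(spec))

# Halving the pulse duration doubles its bandwidth: the classical "UP".
wide, narrow = spectral_width(1.0), spectral_width(0.5)
print(narrow / wide)  # ~2
```

The same calculation works for a guitar-string pluck or an ocean swell: any signal confined tightly in time is necessarily spread out in frequency, and vice versa.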

That sounds interesting. Can you give more detail? Why would it be impossible?

I can give Hall’s own words:

I got these from the notes from the lecture he gave in 1999 that I attended. They were on his website in conjunction with the conference I was attending. He had taken the slides down for a while - I looked for them a couple of years ago and they weren’t there. I e-mailed him and asked him to send me his slides and never heard back, but the slides are back on his site, albeit with several unfortunate typos. Click on the title slide, “The Mysteries of Quantum Mechanics Explained”. There’s also a link to his slides from another presentation I did not attend, about setting your brain up with a “baloney detection kit”. Again, they’re slides that were meant to accompany a lecture, but they are pretty thorough, and ought to be of interest to a number of Dopers.

Anyhoo, take the case of two hydrogen atoms. If they were brought together and their positions were known, then how they interacted might depend on whether the electrons, and thus their specific properties, were each on the side of their respective atoms that faced each other, or on opposite sides, or in some other configuration. The same would go for any two atoms or molecules you brought together. You would never know what would happen in chemistry unless you could know the position and momentum of every electron in every molecule involved at the instant you brought them together.

Our experience with chemistry tells us this is not the case. Chemistry results are repeatable, meaning the atoms and molecules of each type are always identical. This suggests that the positions of the electrons are unknown not only by us, but by the molecules themselves.

What you just said. This is very interesting to me. I can see that chemistry would be different if electron orbitals were not Heisenberg clouds but classical objects. It’s in a category I think of as intellectually crucial for a number of reasons. One is the subjectivity problem: if we are subjective because our bodies endow us with that mysterious property, then the existence of subjectivity must depend on the properties of particles.

It was NOT accepted as a model of reality from the earliest days of quantum theory. It was NOT accepted by most physicists (most famously Einstein) because of its incredible implications for the nature of reality. It was used, and taught, as a mathematical tool without (in all cases) its implications being understood or explained.

It took a great deal of experimental and theoretical work to change these deeply rooted beliefs (how hard it was to get people to accept it as a model of reality is a testament to how messed up quantum physics really is once you understand it).

The most important was the “EPR Paradox”. This was (like the more famous “Schrodinger’s Cat” experiment) a thought experiment that attempted to show that quantum physics could NOT be a model of reality. The EPR (Einstein-Podolsky-Rosen) paradox was based on a quantum event in the past that affected two objects which are now some distance apart in space. By measuring the state of one particle you can infer the state of the other. This means that either quantum physics (and the uncertainty principle) is correct, and by measuring the state of one object the state of the other changes INSTANTLY no matter where in space it is (this “action at a distance” is anathema to modern physics), or it is not correct and there is some underlying theory that can explain reality equally well (or better) without the weirdness of the uncertainty principle (this is known as the “hidden variables” theory).

However, unlike “Schrodinger’s Cat”, the EPR paradox CAN be tested experimentally. The physicist John Bell refined what was a thought experiment down to an “inequality” that could be tested in the real world. The results of testing this inequality massively support the conclusion that quantum physics (and the uncertainty principle) IS a model of the real world (Bell test - Wikipedia).
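The arithmetic behind the CHSH form of Bell’s inequality is short enough to sketch (my own summary, not taken from any poster above): quantum mechanics predicts spin correlations E(a,b) = -cos(a-b) for a singlet pair, and at the standard measurement angles the CHSH combination reaches 2*sqrt(2), beyond the bound of 2 that any local hidden-variable theory must respect.

```python
import numpy as np

def E(a, b):
    """QM prediction for the spin correlation of a singlet pair
    measured along directions a and b (angles in radians)."""
    return -np.cos(a - b)

# Standard CHSH measurement angles for the two experimenters.
a, a2 = 0.0, np.pi / 2
b, b2 = np.pi / 4, 3 * np.pi / 4

# The CHSH combination; local hidden variables require |S| <= 2.
S = E(a, b) - E(a, b2) + E(a2, b) + E(a2, b2)
print(abs(S))  # 2*sqrt(2) ~ 2.83: above the classical bound
```

The experiments measure which prediction nature follows, and they side with the quantum value, which is why the Bell tests carry so much weight in the "model of reality" argument.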

Well, the term is a “real form”. That is, given a complex vector space there are many different “complex conjugations” one can define on it, and these give metrics of various signatures (numbers of + and - signs) on the real space fixed by the conjugation. In that case, you now have to explain why we see this sort of metric as opposed to any other, and why the signature doesn’t change when you move around in spacetime.

Exactly. For those who know a bit about Fourier analysis, recall that if we take a function f(x), transform it to F(p), and multiply by p, we get (up to a constant factor) the same as if we first took the derivative of f and then transformed it.

Now given a square-integrable function with square-integral 1, we can think of it as defining a probability distribution on the real line. That is, it picks a random number x with this probability:

P(a<x<b) = \int_a^b |f(x)|^2 dx

We find the expectation value (mean) by integrating

\mu_x = \int x |f(x)|^2 dx

and the variance by

\sigma_x^2 = \int (x-\mu_x)^2 |f(x)|^2 dx

Now, if f is integrable and square-integrable, its Fourier transform F is as well, in which case we can also do the same process for F:

\mu_p = \int p |F(p)|^2 dp

\sigma_p^2 = \int (p-\mu_p)^2 |F(p)|^2 dp

And some functional analysis tells us that

\sigma_x \sigma_p \ge k

for some constant k. The Heisenberg Uncertainty Principle is just applying this very general rule to the case where f is the wavefunction in position-space of a quantum system.
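As a numerical check of that statement (my own sketch, using numpy’s FFT, whose transform variable is the ordinary frequency f, so I take p = 2*pi*f; with that convention the constant is k = 1/2): a Gaussian wavefunction saturates the bound exactly.

```python
import numpy as np

# Wavefunction: a Gaussian in position space (units with hbar = 1).
n, dx = 8192, 0.01
x = (np.arange(n) - n // 2) * dx
f = np.exp(-x**2 / 2.0)

def rms_width(values, weights):
    """Standard deviation of a distribution given on a grid."""
    w = weights / weights.sum()
    mu = np.sum(values * w)
    return np.sqrt(np.sum((values - mu) ** 2 * w))

# sigma_x from the position-space probability density |f(x)|^2.
sigma_x = rms_width(x, np.abs(f) ** 2)

# Momentum-space wavefunction via FFT; p = 2*pi*f in numpy's convention.
F = np.fft.fft(f)
p = 2 * np.pi * np.fft.fftfreq(n, dx)
sigma_p = rms_width(p, np.abs(F) ** 2)

# A Gaussian saturates the bound: sigma_x * sigma_p = 1/2.
print(sigma_x * sigma_p)  # ~0.5
```

Any non-Gaussian wavefunction gives a strictly larger product, which is the functional-analysis fact the post above is gesturing at.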