Gary Zukav’s book came out in 1979, but its title was, as @eschereal indicated, The Dancing Wu Li Masters with that spelling. I can’t find any indication that wu and woo are related through him.
Veritasium has a good video on this subject. (At least it sounds good to my non-physicist ears.)
That Chinese syllable (roughly the same sound as English “woo”) has always been transliterated as Wu. It’s the same in the old Wade-Giles system; I don’t think any system has used Woo.
I agree that there’s no etymological connection in this branch of the multiverse.
Correct, but it’s worth adding that the book title did not include the actual Chinese characters, which would have been unambiguous, and although the obvious primary meaning of the transliteration Wu Li is “physics”, both wu and li are common syllables that correspond to a lot of other characters and meanings, so a lot of punning is possible.
Just for a bit of fun I burned up a few minutes trying to get closer to when woo became a synonym for new-age pseudo-science.
There are clear uses of woo-woo that go back decades or even longer - but with different meanings or uses: the sound of birds (especially owls), ghosts, or possibly even versions of whoa. The sound of a Theremin also gets cited.
There is, however, the old slang phrase pitching the woo, which is basically talking nonsense - usually portrayed as a guy talking nonsense to a girl (probably with nefarious intent), or a salesman pitching. This might derive from woo as a centuries-old synonym for courting; in that sense “woo” possibly derives from bending/inclining someone toward you. The idea of a salesman pitching the woo as an origin is not bad. But it wasn’t in common use.
In terms of usage in conversations on science, it seems to date back to the early 1980s at the earliest.
Scientists talking about spiritual interpretations of quantum physics may well have labelled the whole thing quantum woo, in reference to The Dancing Wu Li Masters. So I am going to stick with the idea that there could be a real link. The gap between the book appearing in 1979 and the use of woo to describe pseudo-science 4-5 years later is pretty compelling. Of course it may well have been a happy confluence that woo could be brought back into use in this way.
The recent Nobel Prize for work on quantum entanglement is relevant, as that work is considered to have ruled out the “hidden variables” answer.
Bell inequality violations don’t rule out hidden variables; they merely imply that if there are hidden variables, they influence one another in a non-local way. Bohmian mechanics (of which Bell was one of the most influential proponents) is a theory in which the quantum formalism is augmented with (‘hidden’) particle positions that always have a well-defined value.
In Bohmian mechanics, the particles are ‘guided’ by the wave function, which yields the characteristic interference pattern. There is also a clear answer as to what causes the vanishing of the interference pattern in the Bohmian picture: if you place a detector into the experiment, what happens is that the particle will become entangled with the detector, and the combined wave function of the detector and the particle will determine the trajectory. But it can be shown that, for a detector of appreciable (macroscopic) size, this guiding wave function will no longer contain significant interference terms—that’s just the mechanism of decoherence.
In fact, this story, sans the definite particle positions, can just as well be told in ordinary quantum mechanics. Add the detector into the experiment, the reduced wave function of the particle will no longer contain interference terms (for all practical purposes). And calculating the probabilities of finding the particle at the screen will yield excellent agreement with experiment. Many people are satisfied with this, but it’s an incomplete explanation: the state you get out using that procedure is one in which the particle has a certain probability to be at any given point on the screen, but—contrary to Bohmian mechanics—doesn’t have any particular well-defined position.
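That vanishing of the interference terms can be sketched numerically. Here is a toy model (my own illustrative construction, not from any particular textbook): entangle the particle’s two paths with detector “pointer” states, trace the detector out, and watch the off-diagonal (interference) terms of the particle’s reduced state scale with the overlap of the pointer states - which is essentially zero for a macroscopic detector.

```python
import math

# Toy decoherence model (illustrative only, not tied to a specific setup).
# Joint state after the detector interaction: (|L>|d_L> + |R>|d_R>)/sqrt(2),
# where |d_L>, |d_R> are the detector's pointer states with real overlap
# s = <d_L|d_R>.

def reduced_density_matrix(overlap):
    # Tracing out the detector leaves rho = 0.5 * [[1, s], [s, 1]]:
    # the off-diagonal (interference) terms are weighted by the overlap.
    return [[0.5, 0.5 * overlap],
            [0.5 * overlap, 0.5]]

def screen_intensity(phase, overlap):
    # Detection probability at a screen point with relative path phase:
    # the interference term survives only to the extent that the
    # pointer states still overlap.
    return 0.5 * (1.0 + overlap * math.cos(phase))

print(reduced_density_matrix(0.0))     # orthogonal pointers: [[0.5, 0.0], [0.0, 0.5]]
print(screen_intensity(math.pi, 1.0))  # full overlap: ~0, destructive interference
print(screen_intensity(math.pi, 0.0))  # no overlap: flat 0.5, pattern gone
```

A microscopic “detector” (overlap near 1) barely dents the fringes; a macroscopic one (overlap near 0) flattens them completely, which is the decoherence mechanism described above in two lines of arithmetic.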
So, according to the quantum formalism, merely obtaining ‘which path’-information doesn’t help you explain the observed detections at the screen, although it does allow you to derive their distribution. It’s here that we need to get into the question of interpretation. The Bohmian option has already been mentioned: there’s simply an additional particle trajectory added to the dynamics, and that explains why you detect the particle at a given point.
A proponent of the Copenhagen-interpretation would just say that, while you can use the quantum formalism to describe anything, you can’t in a given experiment use it to describe everything—the measurement apparatus itself must remain beyond the quantum description, so as to facilitate the ‘collapse’ of the wave function. Here, there is a large degree of arbitrariness regarding what you consider the ‘measurement apparatus’ to be—basically, you can move the ‘cut’ between the system and the apparatus around (which is referred to as the ‘movability of the Heisenberg cut’).
On the many worlds-theory, decoherence leads to the emergence of macroscopically distinct branches of the world, in each of which the particle is observed at a certain point. On the QBist interpretation, wavefunctions just quantify the knowledge we have about a given system, and the collapse is as unremarkable as the collapse of the probability distribution ‘50% heads and 50% tails’ to any particular value once the coin lands.
Other ‘interpretations’ modify the quantum description more radically. So-called ‘objective collapse’-theories have the system randomly enter a certain, definite state, with a frequency governed by some appropriate parameter connected to ‘macroscopicity’ (such as mass), such that macroscopic entities are virtually always in a definite state. This doesn’t really interpret the quantum formalism so much as propose an alternative to it, with in-principle observably differing predictions. A recent formulation is a modern successor of the ‘consciousness causes collapse’-theory, which doesn’t use mass, but a quantity related to the integrated information of a system as a collapse-parameter, with the postulate that integrated information is characteristic of conscious systems from Tononi’s integrated information theory.
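For a feel of how a macroscopicity parameter does the work in such theories, here is a back-of-the-envelope sketch using particle number as the parameter. The rate constant is roughly the order of magnitude proposed in the original GRW model, but treat all the numbers as illustrative, not as the actual theory.

```python
import math

# Back-of-the-envelope sketch of a GRW-style objective-collapse model.
# RATE is roughly the order proposed by GRW (~1e-16 collapses per
# particle per second); illustrative numbers only.
RATE = 1e-16

def superposition_survival(n_particles, seconds):
    # Probability that no collapse event has hit any of the particles,
    # i.e. that the superposition is still intact after the given time.
    return math.exp(-n_particles * RATE * seconds)

print(superposition_survival(1, 1.0))     # single particle: essentially 1.0
print(superposition_survival(1e23, 1.0))  # macroscopic body: essentially 0.0
```

The point of the sketch: a lone particle stays in superposition for cosmological timescales, while anything with ~10²³ particles collapses almost instantly, so the modification is in principle testable in the mesoscopic middle ground.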
So, what counts as observation is largely interpretation-dependent. On some interpretations, an observation is a physical interaction like any other; in others, observers and ordinary physical systems must be described in a fundamentally different way (although observers can themselves be ordinary physical systems relative to other observers); in yet others, observations are made whenever a system interacts with something ‘macroscopic’ enough. Which of these, if any, is right is an area of active research.
And for completeness there’s superdeterminism.
The way Bell described this far-fetched loophole was his assumption that the experimenter has “free will”. This is misleading; it does not really correspond to any common notion of free will. It is the assumption of random sampling. All of QM rests on the assumption that experimental results are a random sample, an unbiased representation of reality from which we can therefore make valid inferences about the nature of reality. But all of science rests on the assumption that experimental results are a random sample of reality.
It’s very difficult to make sense of this loophole as anything other than a conspiracy theory. That the universe is a simulation, and the entity running the simulation is messing with us. Nevertheless, a few people think we should take it seriously - Sabine Hossenfelder, for example. She has written a couple of papers on it which she claims propose experimental tests for superdeterminism. I won’t claim to understand them, but I don’t see how you test a theory which, if true, would appear to imply that no theory (including itself) is falsifiable.
Aaaand lots of that is another language to me, and I suspect to a fair number of others … yes, I see the article I linked is specific: “The trio’s experiments proved that connections between quantum particles were not down to local ‘hidden variables’, unknown factors that invisibly tie the two outcomes together. Instead, the phenomenon comes from a genuine association in which manipulating one quantum object affects another far away.”
Specifically “local” hidden variables are ruled out anyway. Can you try again to explain what a “non-local” hidden variable could be or means?
To my general-public understanding, I can grasp the big picture: our sense of cause and effect depends on our experience of the time dimension, but time doesn’t work like that on a fundamental level - the future is as fixed as the past. That’s one way I’ve heard it explained, anyway. And I accept that multiple interpretations exist, many beyond my ability to understand.
Basically, it means nothing but that the values of such variables can influence one another across arbitrary distances instantaneously. In Bohmian mechanics, the wave function, and with it the equation guiding the motion of any given particle, depends on the positions of (in principle) every other particle the first particle is entangled with, which will generally be every other particle it has ever interacted with. So, change the position of one of these particles, change the wave function, change the guiding equation, change the motion of the first particle, instantaneously, across arbitrary distances.
I’ve tried to explain the relevance of this possibility for Bell inequality violations here, hopefully in layperson-friendly terms. The gist of it is that a Bell inequality is ultimately nothing but a necessary condition for a set of variables to have a joint probability distribution, i.e. for there to be a function that assigns a probability to each possible combination of values. If a Bell inequality is violated, no such distribution exists; one way for there to be no such distribution is if the values of these variables influence one another. If we remove the systems far enough from each other, then these influences must travel at arbitrarily high speed, i.e., be non-local.
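The “necessary condition for a joint distribution” point can actually be checked by brute force. The sketch below (my own throwaway helper, nothing standard) enumerates every deterministic assignment of ±1 outcomes to the four CHSH measurements; any joint probability distribution is a mixture of such assignments, so its CHSH value can never exceed the deterministic maximum of 2 - while quantum mechanics predicts up to 2√2.

```python
import itertools
import math

# CHSH combination: S = <A0 B0> + <A0 B1> + <A1 B0> - <A1 B1>.
# If a joint distribution over (A0, A1, B0, B1) exists, S is an average
# over deterministic +/-1 assignments, so |S| is bounded by the maximum
# over those assignments - which is the Bell (CHSH) bound.
best = max(abs(a0*b0 + a0*b1 + a1*b0 - a1*b1)
           for a0, a1, b0, b1 in itertools.product([-1, 1], repeat=4))

print(best)              # 2 -- the Bell/CHSH bound
print(2 * math.sqrt(2))  # ~2.828 -- the quantum prediction (Tsirelson bound)
```

Experiments that measure S > 2 therefore rule out any joint distribution over all four variables at once, which is exactly the “no local hidden variables” conclusion.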
And in the real world, the cat doesn’t stay entangled exclusively with the radioactive sample. In the real world, a live cat and a dead cat will change the way that the box it is in interacts with the outside world. The “decision” that is made by the radioactive sample emanates out pretty much at light speed, so the experimenter and his friend are well entangled with the current state of the cat long before anyone opens a box or a door.
I see it this way. You don’t need consciousness to collapse the wavefunction, but you do need consciousness to care about the state of the wavefunction.
But keep in mind that when you change the downstream side of the experiment, it doesn’t actually change what you would see on the upstream side. All it does is change the information that you have access to, in that you can either use it to correlate the photons, which you can then use to find the interference pattern, or use it to tell which slit the photon went through, in which case the information you could use to find the interference pattern is lost.
Interesting lecture by Sean Carroll. I don’t agree with him about the many-worlds interpretation of quantum mechanics. The Schrödinger’s cat thought experiment assumes it’s possible to prevent observation over time for a system, with some kind of box. But if excluding a system from observation is not possible, and you can observe the cat the whole time, then all you’re doing is plotting the decay of a radioactive atom over time.
Not really; it sounds like you’re talking about hidden variables. Experimental results show that when a superposition exists, it does not just mean that an underlying definite state is hidden. There exists no definite state until observed.
You need to get into the nitty gritty of the Bell inequality experiments to understand why this is so.
I thought hidden variables was how Einstein accounted for quantum entanglement. That’s not what I was trying to say. I’m just suggesting that superposition only exists if it’s theoretically not possible to observe the system over time.
But that seems to be framed the wrong way around. It is possible to not observe a system for a period of time, and for the observations that we ultimately make to prove that superposition is “real” (in the sense that the model works to predict the results).
I don’t quite get this form of argument. In fact I don’t know why superdeterminism even deserves a special name: it’s just plain old determinism, as best I can tell. Which itself is totally consistent with the “block universe” concept; that the future already exists in a sense and that the universe is really just a 4D chunk of stuff that abides by certain rules. Superdeterminism in a QM sense is just the idea that events in one 3D slice must follow certain rules to ensure that certain statistical properties hold on another 3D slice.
I guess it rubs people the wrong way that their actions would somehow be constrained in this way, but the way I see it, it’s not much different than being constrained by conservation of energy, momentum, etc.
The Bell Inequality experiments all depend on statistical analysis of the results. As I understand it, superdeterminism is saying that statistical inference is invalid because the results cannot be assumed to be an unbiased sample because…???
That seems to me a lot more than just determinism. It means that our universe is set up deterministically in a very careful way to make it look exactly like spooky action at a distance takes place.
Because we live in a universe where the laws of quantum mechanics hold? The universe doesn’t owe us anything; it doesn’t have to abide by any rules at all. As it happens, it has statistical properties consistent with a generalization of ordinary probability, where the 2-norm holds instead of the 1-norm (i.e., we have to square amplitudes to get the actual probability, which enables the cancellation required for interference patterns to form). That’s what experiment is consistent with and what we should expect until we see otherwise.
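The 2-norm point can be made concrete with a two-path toy calculation (illustrative amplitudes, not any particular experiment):

```python
import cmath
import math

# Two-path toy model: each path contributes amplitude 0.5; the second
# path picks up a relative phase. The quantum probability is the squared
# modulus of the *summed amplitudes* (2-norm), which allows cancellation.
# Classically mixing the individual probabilities (1-norm) would give
# 0.25 + 0.25 = 0.5 regardless of phase - no interference.

def two_path_probability(phase):
    a1 = 0.5
    a2 = 0.5 * cmath.exp(1j * phase)
    return abs(a1 + a2) ** 2

print(two_path_probability(0.0))      # constructive: 1.0
print(two_path_probability(math.pi))  # destructive: ~0.0
```

Sweeping the phase traces out the interference fringes; dropping the cross term (i.e., adding probabilities instead of amplitudes) flattens them, which is the whole difference between the 2-norm and 1-norm worlds.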
I’m not sure I understand this. If you chose to observe a system, but intentionally neglect periods of time, would you call what you didn’t observe a superposition?
Um… that seems like a non sequitur to me.
The evidence that the laws of QM do hold is based on statistical inference, as is all of science. Superdeterminism appears to be claiming that statistical inference is invalid, so it’s effectively saying that the laws of QM (in a sense) do not hold.
But I have learned from experience that if I’m disagreeing with you it probably just means I don’t understand superdeterminism correctly.
You give me way too much credit! It’s quite possible that I’m the one not understanding it properly. Or neither of us do.
I think superdeterminism is equivalent to another “interpretation” I came up with when I first read of the Bell inequality: what if every particle contained a tiny local simulation of the universe that told it what to do? All the simulations would be started at the beginning of time and stay in sync forever, so the results could enforce any desired statistical property. That’s surely a type of “local hidden variable” theory.
Of course it’s even sillier than the other common QM interpretations but to my mind gave another loophole. And superdeterminism seems to be another.
I’m not going to say that statistical independence isn’t a serious loss. Just that:
- Ordinary determinism means it was never really true anyway, even in the classical realm.
- Every other interpretation has something equally bad.
- I don’t get why it should extend to all kinds of other silliness, like mind control or making all of science useless. It works in a very specific way.
Just as with the usual interpretation of entanglement, what we have is a correlation between two things that can’t be explained if they were truly independent. But instead of the particles themselves having a dependency, it’s the measurements. On the other hand, is that really worse than postulating an extremely large number of universes, all existing at once, that collectively abide by the same statistics? Doesn’t seem like it to me.
Personally, it seems OK to assume that statistical independence holds, except in the case of entangled states, in which case we should expect our measurements (even when selected by a “randomizer”) to be correlated in a certain way. It’s just one more filter applied to which universes we can find ourselves in.