In experiments, they first prepare a few photons or other particles in a “Schrödinger’s cat” superposition state, then later perform measurements to check how close they got to the ideal entangled state (like a Greenberger–Horne–Zeilinger state). If they screwed up the first step, it will be obvious from the results.
She would say that that is a mischaracterisation of superdeterminism.
Here is a paper in which she provides a pretty solid commentary on her version of superdeterminism, and why it is useful, testable, and scientific. The name is a problem: people read a lot into it just by trying to interpret it.
Determinism tells you that which observables you measure at any given time is set, effectively, by the boundary conditions of the universe. Superdeterminism tells you that these boundary conditions are such that which observables you measure is determined by what the outcome would be. So superdeterminism needs to specify much more initial data, but gives no reason why that data should be specified in just this way, thus substituting a far more complex hypothesis without offering more explanatory heft.
But the real problem with it is that it wreaks havoc with abductive inference. Consider a simple variable that can have either of two values, say ‘the moon is there’ and ‘the moon is not there’. Then take a second variable that determines ‘I look at the moon’ or ‘I don’t look at the moon’. Determinism would have it that the time sequence of both is fixed; thus, sometimes when you look, the moon is there, sometimes it’s not. This allows you to abstract away from the context of your looking, and propose a general hypothesis that ‘the moon is sometimes there and sometimes it isn’t’.
On superdeterminism, however, both time series are aligned such that, say, whenever you look, the moon is there, and when you don’t look, it’s whatever. You’re not sampling from the actual distribution of the first variable. But then, trying to abstract away from the context of your observation—trying to find a generally valid theory, in other words—no longer works: ‘the moon is always there’ is simply false. Now just replace ‘the moon’ with ‘Bell inequality violation’.
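To make that concrete, here’s a toy simulation (purely illustrative; all the numbers and the 30% “looking rate” are made up) of what an observer would infer under each assumption:

```python
import random

random.seed(0)
p_true = 0.5  # the actual frequency with which "the moon is there"
hidden = [random.random() < p_true for _ in range(100_000)]

# Ordinary determinism: when we look is fixed in advance, but statistically
# independent of the hidden value, so our samples reflect the true distribution.
seen = [h for h in hidden if random.random() < 0.3]
print(sum(seen) / len(seen))  # ~0.5: matches the actual frequency

# Superdeterminism: the two time series are correlated; here, we only
# ever happen to look when the moon is there.
seen_sd = [h for h in hidden if h]
print(sum(seen_sd) / len(seen_sd))  # 1.0: we'd infer "the moon is always there", falsely
```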
What the notion of superdeterminism thus entails is that you can’t formulate general theories about the world apart from the context of observation anymore. But here, things become incoherent: we have introduced superdeterminism only as a response to such a theory (quantum mechanics). Thus, accepting superdeterminism entails that we never had rational grounds to accept quantum mechanics, and hence, to introduce superdeterminism.
Of course, it’s always possible that it’s just God fucking with us…
Unfortunately, she fails to understand the criticism levelled against superdeterminism, and ends up knocking over a straw man. She refutes the idea that superdeterminism ought to be rejected because letting go of statistical independence may yield false explanations; but that is not what the criticism is about. Rather, the problem is that without statistical independence, we have no warrant to propose any explanations at all, because we have no licence to conclude anything beyond the mere observational data. Thus, all we can really do is stamp collecting: every hypothesis collapses to a mere statement of observations.
Hossenfelder is a brilliant physicist, and on matters of physics and mathematics, it pays to listen to her more often than not. But as soon as she starts philosopher bashing, she tends to miss her target, because she simply fails to engage with the arguments on their proper terms. She doesn’t have the training, and never really bothered to acquire it; in general, her opinions on philosophy have much the same status as a geologist’s opinions on vaccination.
An outcome is just another name for an effect, and we call the initial boundary condition the cause. But cause and effect have no real meaning in a deterministic, reversible universe. We can just as well call the measurement the cause and the initial boundary condition the effect. Then, there’s no “conspiracy,” just a state evolution that has to make sense in both directions.
But that’s already the case in quantum mechanics. Why not the moon, too? If every time I observe it I find that it’s behaved as if Newtonian mechanics is valid, then I don’t have to take a position on whether it exists when I’m not looking. Likewise, when I observe an electron I expect it to show up in places that QM predicts, but when I’m not observing it I’m happy to say “whatever.”
Science has always rested on an unsupported foundation. We never had any reason to assume things like statistical independence, except that when we make that assumption we get theories that mostly work. Except when they don’t, and we observe weird correlations that require breaking some cherished assumption (or putting our fingers in our ears and going la-la-la).
We never had any reason to suppose we can come up with any consistent theories, for that matter. And yet we do, because it seems to work. Maybe tomorrow every current theory we have will become invalid. We have no reason to suppose otherwise except that it hasn’t happened yet.
Agreed. I find her snide remarks about “philosophers doing physics” mildly offensive. She doesn’t seem to realise that “physicists doing philosophy” is just as big a problem. And she makes some notable missteps. OTOH, I find her musings on these subjects interesting; it is worthwhile having someone cover the ground.
I did know one philosopher (actually the department’s professor) who was a renegade physicist. He was not to be trifled with. One memorable occasion saw him lock horns with Roger Penrose when Penrose was peddling his new book, The Emperor’s New Mind. Penrose was happy to tell us computer scientists what it was we did. That didn’t work out well either.
The problem is the specificity, not cause and effect. Most possible deterministic histories, i.e. time series of possible values for both variables, will obey statistical independence (in fact, in the limit, all but a measure-zero set will). So superdeterminism singles out a highly specific, atypical history from all possibilities, which is something we would ordinarily require an explanation for.
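A quick Monte Carlo sketch of that typicality claim (illustrative only, not a proof): draw random histories for both variables, and the correlation between them concentrates around zero as the histories get longer.

```python
import random

random.seed(1)

def correlation(n):
    """Sample correlation between two independently drawn binary histories."""
    looks  = [random.randint(0, 1) for _ in range(n)]
    values = [random.randint(0, 1) for _ in range(n)]
    ml, mv = sum(looks) / n, sum(values) / n
    cov = sum((l - ml) * (v - mv) for l, v in zip(looks, values)) / n
    return cov / ((ml * (1 - ml) * mv * (1 - mv)) ** 0.5)

for n in (100, 10_000, 1_000_000):
    print(n, correlation(n))  # shrinks toward 0: independence is the typical case
```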
Additionally, its explanatory power is arguably smaller: while QM straightforwardly predicts a specific value for, say, the CHSH inequality, under superdeterminism any value is possible, including values exceeding what is possible in QM. Hence, we add complexity to the explanation, but end up explaining less.
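For concreteness, here’s the standard textbook computation of the CHSH value for the singlet state, where the correlation at measurement angles a and b is E(a, b) = −cos(a − b):

```python
from math import cos, pi, sqrt

def E(a, b):
    """Singlet-state correlation for measurement angles a and b."""
    return -cos(a - b)

a, a2, b, b2 = 0, pi / 2, pi / 4, 3 * pi / 4  # the standard optimal angles
S = abs(E(a, b) - E(a, b2) + E(a2, b) + E(a2, b2))
print(S, 2 * sqrt(2))  # ~2.828: above the classical bound 2, below the algebraic maximum 4
```

QM pins the value at 2√2; a theory in which any value up to the algebraic maximum of 4 can occur explains strictly less.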
QM might not assign a definite value to all observables at all times, but that’s a far cry from having the fact of your observations depend on the value of the observable. The former allows abstracting away from observation to a general theory; the latter doesn’t, because the behavior away from the context of observation fails to match that within it.
Science rests on a hypothetical foundation, but superdeterminism entails giving up on the possibility of hypotheses. All you can do is just restate your observations; anything going beyond that is unsupported.
Yes. Not that specialists will always be right on their topic, but I have to admit that I find the attitude that sees somebody barreling into a highly technical field, convinced they know better without any training, somewhat puzzling. I mean, if everybody else seems to be going the wrong way, you gotta at least entertain the possibility that it’s actually you…
I got a call from my wife while I was driving home the other night telling me to be careful driving home because she heard a radio report about someone driving the wrong way on the expressway.
“Honey, it’s not just one person!”
Obviously much of this discussion goes farther into the weeds than the likes of me can follow.
Some very basic and likely stupid questions.
Experiments start by entangling two particles. How is that done? Someone in this thread implied that any particles that have interacted with each other are entangled. That can’t be right, can it? Then everything would be subject to “spooky action”.
A group of photons travels at the speed of light, duh. The closer other particles get to the speed of light, the slower they travel through time. A hypothetical particle that travels faster than the speed of light would travel backwards through time. So a photon does not travel through time at all?
Lastly, do String Theorists try to address how their models, inclusive of extra dimensions curled up, impact things like quantum entanglement?
That sounds like “the science is settled”. Which has always kinda bothered me. On the other hand, if I say light has mass, I have to entertain that it’s actually me.
Basically, you need a superposed state and an interaction which ‘cares about’ the states in superposition. Superposition is ubiquitous in the quantum world, so in general, if two systems have interacted in a way that leaves a mark, so to speak, they’ll be entangled. The famous cat is in a superposition of ‘dead cat’ and ‘live cat’, and if we suppose that Schrödinger, upon opening the box, will be either happy or sad depending on what he finds, then after the opening, the system will be in a superposition of ‘dead cat, sad Schrödinger’ and ‘live cat, happy Schrödinger’ (neglecting issues of measurement), which is an entangled state.
The reason we don’t usually notice any phenomena related to entanglement in the everyday world is precisely its contagiousness: you can only tell that a system is entangled if you have fine-grained control over all the entangled parts. Even with two entangled particles, if you lose access to one of them, the other will just ‘look like’ it is in either of the states available to it with a certain probability (50% for a maximally entangled state with two degrees of freedom).
In a sense, the entanglement quickly ‘leaks out’ of a given system and becomes extremely hard (impossible for all practical purposes) to detect. That’s in fact one of the reasons quantum systems need to be extremely well isolated from the environment to do anything useful with them (like quantum computation).
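Here’s a small numpy illustration of that ‘looks like a coin flip’ point: build a maximally entangled two-qubit state, throw away one half (mathematically, a partial trace), and what remains is indistinguishable from a classical 50/50 coin.

```python
import numpy as np

bell = np.array([1, 0, 0, 1]) / np.sqrt(2)  # (|00> + |11>)/sqrt(2)
rho = np.outer(bell, bell.conj())           # density matrix of the pair

# Partial trace over the second qubit: reshape to (2,2,2,2) and sum over
# the matching indices of the subsystem we've lost access to.
rho_A = np.trace(rho.reshape(2, 2, 2, 2), axis1=1, axis2=3)
print(rho_A)  # [[0.5, 0], [0, 0.5]]: just a 50/50 mixture, no trace of entanglement
```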
Things are sometimes put that way, yes, but I don’t think it’s a useful picture. To think about things from another point of view in relativity, one usually imagines the rest frame associated with an entity, i.e. the frame in which it is at rest. Photons have no rest frame, so it’s difficult to say whether we should associate a point of view with them at all, and hence, a time they ‘experience’.
String theory does not modify anything regarding quantum mechanics. Basically, you can think of quantum physics as a sort of framework within which concrete theories can be formulated, in opposition to the framework of classical or Newtonian physics. String theory is a particular theory formulated within the quantum framework, so things like entanglement are basically just built in from the start.
Theories are often formulated first in a classical framework, and then ‘quantized’, to find an appropriate quantum version of the theory. This is also the case with string theory. One interesting phenomenon here is that the quantization process, in order to not produce inconsistencies, singles out the number of spacetime dimensions in this case; classical string theory can be formulated with different numbers of dimensions, but after quantization, only 9+1 dimensions are allowed.
That’s not what I mean. It’s just that, typically, the experts in a field have some reason to believe what they do, and one should first try to get a good grip on that before one tells them why they’re wrong. They are wrong, sometimes; but that shouldn’t be your default assumption, even if their view seems counterintuitive.
So in the experiments, how, mechanistically, is that done? Just bouncing photons off each other, or what?
Again, maybe a stupid question: a rest frame is always relative to other things, yes? I’m at rest relative to the earth but in motion relative to the sun, and so on. The speed of light, though, is that speed relative to everything…?
I’ve always thought the heart and soul of science is questioning the status quo. E.g. ask the experts (or wikipedia) if light might have a tiny mass. Learn why they think that’s not possible. Getting a “good grip” is the hard part.
You can dive deep into the nature of science. A significant problem with a lot of commentaries on science is that they take fundamental physics as the definition of science. This goes back to Thomas Kuhn and his notion of the paradigm shift. He presented a view of science that was, well, IMHO somewhat flawed. Something of a failed physicist, he cast scientific progress as a mirror of what he perceived in physics at the time: one of major overthrowings of existing paradigms, classical mechanics by quantum, Newton by relativity. Which is great, but the world progresses without such events. Indeed, unless you are getting a free trip to Stockholm for your efforts, it seems you are not doing science.
Sure, overthrowing a paradigm might be big news, but that assumes the current paradigms are wrong. Kuhn doesn’t seem to provide much support for validation unless it invalidates something, yet validation is a cornerstone of science. Nor does he provide much support for practical science.
IMHO there has been a corrosive influence on a lot of science that stems from Kuhn. The requirement for “impact” in a huge amount of academic work is the tail wagging the dog. You don’t create a paradigm shift in a three-year grant-funded research plan; expecting one debases the art. So everybody is tailoring their work to continually provide a stream of novel results, often in hilariously convoluted ways. It isn’t good.
Well, photons are somewhat reluctant to interact with one another, so in that case, you can’t do it using linear optics (lenses and mirrors and such); you have to introduce a nonlinearity. The most common way to create an entangled pair of photons is parametric fluorescence (also known as spontaneous parametric down-conversion), where one photon is converted into a pair of photons which, at certain angles, will be entangled. This is done by shining a laser at a birefringent crystal, such as beta barium borate (BBO).
Generally, in quantum computation, one uses so-called ‘conditional’ or ‘controlled’ operations to create entanglement. The most commonly used is the ‘controlled-not’ (CNOT), which acts on two qubits A and B such that the value of B is flipped if A’s value is 1, and left invariant otherwise. How this is physically realized differs between the various architectures.

In trapped-ion systems (ions ‘trapped’ in place by electromagnetic fields), where qubit values correspond to different electronic states (e.g. ground state and first excited state for 0 and 1), you can drive the transitions using laser pulses tuned to the energy difference between the states. The coupling is achieved via the collective motion of a chain of ions: you apply a laser pulse to the first ion (qubit A) that puts the chain into a certain state of motion, and a laser pulse to the second ion that changes the state of qubit B based on that motion.
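Stripped of all hardware details, the mathematics is just this (a minimal numpy sketch): put qubit A into superposition, then flip B conditional on A.

```python
import numpy as np

H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)  # Hadamard: |0> -> (|0>+|1>)/sqrt(2)
CNOT = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]])               # flips B iff A is 1

ket0 = np.array([1, 0])
state = np.kron(H @ ket0, ket0)               # (|00> + |10>)/sqrt(2): not entangled
state = CNOT @ state
print(state)  # (|00> + |11>)/sqrt(2): a maximally entangled Bell state
```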
For an object in uniform motion, you can choose a coordinate system that has it moving at any speed (slower than light). There’s a unique such system in which it’s at rest; essentially, this corresponds to looking at the motion of everything else relative to that object. Say you have two objects, one (X) moving left at speed v, the other (Y) moving right at speed u. Then in the rest frame of X, Y moves right at speed v + u, and in the rest frame of Y, X moves left at speed v + u.
This is only true if both objects move much slower than light; once you take that into account, velocities must be added in such a way that the result doesn’t exceed the speed of light. The addition law becomes:
u^\prime = \frac{v + u}{1 + \frac{v\cdot u}{c^2}}
If you now want to know the speed of a photon relative to some object moving at a\cdot c, with a < 1, you get:
u^\prime = \frac{a\cdot c + c}{1 + \frac{a\cdot c\cdot c}{c^2}} = \frac{(1 + a) \cdot c}{(1 + a)} = c
So in every frame, the photon will be moving at c. Hence, if you want the speed of light to be the maximum attainable speed, you get that it will also be the same in all frames of reference.
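If you want to play with the formula, here’s a direct transcription into Python (units chosen so that c = 1):

```python
def add_velocities(v, u, c=1.0):
    """Relativistic velocity addition: u' = (v + u) / (1 + v*u/c^2)."""
    return (v + u) / (1 + v * u / c**2)

print(add_velocities(0.5, 0.5))    # 0.8, not 1.0: stays below c
print(add_velocities(0.9, 1.0))    # 1.0: a photon moves at c in every frame
print(add_velocities(0.99, 0.99))  # ~0.99995: creeps toward c but never reaches it
```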
I looked at the Hydrodynamic quantum analogs section of the double-slit wikipedia article. I have a question:
If you used a water wave instead of light, and the back plate detected individual water molecules, the double-slit experiment would show a wave-like interference pattern, detected as dots (individual water molecules). If you observed which slit an individual water molecule went through with some kind of flow meter, that observation would disrupt the wave harmonics, and the interference pattern would disappear. So the back-plate detector would show a more particle-like distribution.
If someone could explain where this analogy breaks down when applied to quantum mechanics, I’d understand the double-slit experiment better. Thanks
One easy difference: the two-slit experiment works even when you send through one particle at a time.
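To see what that means for the pattern, here’s a small illustrative simulation (the wave shapes and numbers are made up, not from any real apparatus): each particle lands as a single dot, but the dots are drawn from |ψ₁ + ψ₂|² when the slits are unobserved, versus |ψ₁|² + |ψ₂|² when the path is known.

```python
import numpy as np

x = np.linspace(-10, 10, 1000)
psi1 = np.exp(-(x - 1.5)**2) * np.exp(1j * 4 * x)   # toy wave from slit 1
psi2 = np.exp(-(x + 1.5)**2) * np.exp(-1j * 4 * x)  # toy wave from slit 2

interference = np.abs(psi1 + psi2)**2               # slits unobserved: fringes
which_path   = np.abs(psi1)**2 + np.abs(psi2)**2    # path measured: no fringes

rng = np.random.default_rng(0)
dots_unobserved = rng.choice(x, size=5, p=interference / interference.sum())
dots_observed   = rng.choice(x, size=5, p=which_path / which_path.sum())
print(dots_unobserved)  # individual "clicks" drawn from the fringed pattern
print(dots_observed)    # individual "clicks" drawn from two smooth humps
```

Each run gives only single dots either way; the difference between the two distributions only emerges after many particles have gone through, one at a time.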
Even if you send one water molecule at a time through the slits, wouldn’t that still show an interference pattern on the back plate, unless the flow meter disrupted the harmonics? Why wouldn’t the individual water molecules still show characteristics of the wave?
The analogue isn’t about water molecules or waves in water. It is about water droplets that can be made to stay intact and to propagate, driven by a periodic force.
Waves in water work differently: molecules don’t propagate through the slits and diffract; the wave propagates. The water molecules move about a fixed location in a periodic manner that matches the wave. A molecule exactly in the slit will move back and forth slightly, but does not propagate to the detector. Basically, this is a classical wave.
There can be phonons: particle-like quantum phenomena that propagate, like photons but for sound. But that is a whole different question.