Was that an intentional reference to the fact that it was indeed Alain Aspect who performed some of the first Bell experiments?
Thanks for the clarification. I was indeed confused, as I’m accustomed to relying on your explanations of such things. My mistake in misinterpreting your post.
It does annoy me, though, that most explanations of “quantum spookiness” are at the level of those unopened envelopes. You need a much more detailed explanation to bring the spookiness to bear. And those more detailed explanations usually involve at least trigonometry, putting them beyond the level that most folks can understand intuitively (whatever “understand” and “intuitively” mean).
That’s what I meant by “determined enough”.
It’s not 100%, but it’s close enough for our purposes, say 99.999999999%. If an alien civilization had computers quintillions of times faster than ours, then they would have no problem breaking it. Or, if quantum computing managed to find a way to factor primes, then it could no longer keep your data safe.
There is also the possibility that the encryption algorithm itself has vulnerabilities. The primes that are used are not randomly generated; they are generated using an algorithm that is complex, but not necessarily unbreakable.
And it has not been proven that there is no easy way to factor primes, though it is most likely that there is not.
So yes, as you say, in theory you could crack public-key cryptography. The difference is that you cannot, even in theory, crack quantum cryptography.
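To put some rough numbers on “close enough for our purposes”, here’s a back-of-the-envelope sketch of what naive trial division would cost - the 2048-bit modulus and the 10^18 divisions per second are purely illustrative assumptions on my part:

```python
# Rough estimate with illustrative, assumed numbers: trial-dividing a 2048-bit
# modulus by every candidate up to its square root.
modulus_bits = 2048
candidate_divisors = 2 ** (modulus_bits // 2)   # roughly sqrt(N) candidates to try
ops_per_second = 10 ** 18                       # a generous "quintillion" divisions per second
seconds_needed = candidate_divisors / ops_per_second
print(f"{seconds_needed:.3e} seconds")          # on the order of 10^290 seconds
```

Real factoring algorithms such as the general number field sieve do enormously better than trial division, but are still hopelessly slow at these sizes - which is the sense in which the security is “close enough” rather than absolute.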
I think perhaps the confusion here (which I may have added to by misstating matters) is that intuitive “spookiness” is not synonymous with non-locality.
The Bell test experiments tell us that outcomes for distant entangled particles are correlated in a manner that forces us to abandon either locality or realism.
De Broglie-Bohm pilot wave theory keeps realism: there is a definite hidden state that is revealed by measurement. But this requires that we abandon locality; somehow the hidden state of the entire universe updates instantaneously. This instantaneous non-local updating is certainly intuitively “spooky”.
The more common account (i.e. not Bohm) is to abandon realism. Non-realism means that definite states literally do not exist until they are measured: there are no hidden variables. Technically, we then no longer require any non-local “updating” of hidden states to explain the observed correlations between distant entangled particles. But, of course, this still does not remotely accord with macroscopic intuition. Intuitively, how could these correlations exist without instantaneous non-local “communication” between the particles? Technically they can, but I would still call this intuitively spooky.
So neither account - local non-realism or non-local realism - gets rid of the intuitive spookiness. That’s why Einstein thought QM must be incorrect or incomplete - the experiments that settled the question came only later.
Is that about right?
It occurs to me that you might be thinking about the ‘excess’ information transferred in the teleportation protocol—two classical bits are transferred, but to specify an arbitrary single qubit state, you need two continuous parameters, which needs a whole lot more information than just two bits (infinitely much, in fact). But there are two points here that need to be considered: one, you can’t actually get that information out by measurement—due to Holevo’s theorem, you only get one bit of information per qubit via measurement.
But moreover, you can also do this sort of thing classically: transfer one continuous parameter by sending merely a single bit of information. In fact, that’s just a one-time pad. So suppose Alice has some secret information—a single bit—that’s 1 with probability p, and 0 with probability q (such that p + q = 1). Suppose she wants to send that information to Bob. Suppose further Alice and Bob each have an envelope containing the same random bit, either 1 or 0 with equal probability (this is our ‘entangled state’).
Now suppose Alice opens her ‘secret’ bit. With probability p, she’ll obtain 1. Then, she opens her envelope. If she sees a ‘1’, she also knows that Bob will see ‘1’, and tells him ‘keep’. If she sees ‘0’, she knows Bob also has a ‘0’, and tells him ‘flip’. The end result: with probability p (a continuous real number), Bob will have the value ‘1’, and conversely, with probability q, the value 0, just as with Alice’s initial secret bit—thus, in this sense, a continuous amount of information has been transmitted by sending only a single bit. Nothing spooky about that!
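Here’s a minimal simulation sketch of that keep/flip protocol (the function name and the 100,000-trial default are just illustrative choices): over many runs, Bob ends up holding a 1 a fraction p of the time, even though only a single classical bit travels per run.

```python
import random

def run_protocol(p, trials=100_000):
    """Simulate the classical 'keep/flip' protocol; returns the fraction of runs
    in which Bob ends up holding a 1, which should approach p."""
    bob_ones = 0
    for _ in range(trials):
        secret = 1 if random.random() < p else 0   # Alice's secret bit: 1 with probability p
        shared = random.randint(0, 1)              # the common 'envelope' bit both parties hold
        # Alice compares her secret with her envelope and sends a single classical bit.
        message = "keep" if shared == secret else "flip"
        # Bob applies the instruction to his copy of the envelope bit.
        bob_bit = shared if message == "keep" else 1 - shared
        bob_ones += bob_bit
    return bob_ones / trials

print(run_protocol(0.37))   # prints something close to 0.37
```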
I think so, although ‘intuitive spookiness’ is (perhaps necessarily) a bit vague. That’s ultimately the benefit of Bell’s theorem (and similar results, such as the Kochen-Specker or Leggett-Garg theorems)—it teases out exactly which of our intuitive requirements we need to drop. There’s also the phrasing of Abner Shimony, something to the effect that while quantum theory doesn’t necessitate ‘action at a distance’, it shows what one might call ‘passion at a distance’. Perhaps you mean something like that?
There’s this Mark Alford paper, where he defines this more carefully. I assume this is technically uncontroversial? So far as I can discern, my intuitive notion of “spookiness” precisely corresponds to (violation of) strong locality.
The violation of the Bell inequality is often described as falsifying the combination of “locality” and “realism”. However, we will follow the approach of other authors including Bell who emphasize that the EPR results violate a single principle, strong locality.
Strong locality, also known as “local causality”, states that the probability of an event depends only on things in the event’s past light cone. Once those have been taken into account the event’s probability is not affected by additional information about things that happened outside its past light cone.
https://arxiv.org/pdf/1506.02179.pdf
So we need to take care to distinguish locality from strong locality.
Okay, I am trying to go more for clarification than argumentation here; my knowledge is based solely on many, many hours of watching lectures by Leonard Susskind or Sean Carroll and the like, and reading on the topics. I do not have any sort of formal training or degrees in these topics.
My contention is that there are no hidden variables, and to clarify, when I say hidden variables, I mean a pre-determined state, similar to the envelope analogy. That the entangled particle that goes to the left is spin up, the one to the right is spin down, and we just don’t know that until we measure.
My understanding is that Bell’s inequality calls that into question, as you can statistically determine that those states could not have been predetermined, and he came up with this thought experiment specifically to contest Einstein’s belief that there were hidden variables.
So, if the particles are not already in a predetermined state, then when I measure them, they must in some way communicate between each other as to which state they are to take. That it happens instantaneously is what I understand to indicate non-locality.
That’s what I am pretty sure that I was saying.
Not a fan of that one.
I think that that is what I was saying as well.
Superdeterminism is truly horrible. I think it pretty much has to mean that our universe is a simulation, and it’s being run by assholes.
Yeah. If we’re talking about instances in QM where you can sort of send a message faster than light, the one that blows my mind is the delayed-choice quantum eraser, in which it’s possible to send messages back in time.
And it works…unfortunately the message is received as the combination of multiple interference patterns and looks like noise. Only when you deconstruct the pattern can you figure out what the message is saying. And deconstructing the pattern unfortunately requires information that will be received subluminally.
This is my understanding anyway, based on this PBS space time explanation.
This is also often glossed as ‘Bell locality’, and is essentially equivalent to the conjunction of realism and locality—both are taken to justify the assumption that the probability distribution for distinct events, conditioned on the hidden variables, factorizes. I’m not personally a fan of the nomenclature, as I find it somewhat misleading (a theory that doesn’t satisfy strong or Bell locality may be perfectly local, but not all of its effects have a ‘cause’ in the usual sense), but it’s common enough to talk that way.
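For concreteness, the factorization condition being referred to, in standard notation (λ the hidden variables, a and b the measurement settings, A and B the outcomes), is

P(A, B | a, b, λ) = P(A | a, λ) * P(B | b, λ)

and ‘strong locality’ / ‘Bell locality’ is exactly the requirement that, once λ is fixed, the joint distribution splits up like this.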
Again, Bell himself was a proponent of hidden variables. His theorem simply shows that if there are hidden variables, they must influence one another superluminally.
It’s the other way around: if you assume that the particles are in some fixed, pre-determined state, then you can derive a bound on their correlations (which is a Bell inequality). That this bound is exceeded in quantum mechanics means that if there are pre-determined values, these can’t remain fixed throughout the experiment, and must hence influence one another.
If there are, on the other hand, no such fixed values, then you can’t derive the bound in the first place, and consequently, you don’t have to invoke conspiratorial influences for them to take certain values upon measurement.
In fact, the mathematics behind Bell’s theorem is ultimately very easy. I’ll provide a simple proof with some explanations in the following hidden part:
Proof of the CHSH-inequality with high school math
Suppose you have four observables xA, zA, xB, zB. For present purposes, it doesn’t matter what these physically are, just that they can take on the values +1 and -1, and that you can’t simultaneously measure x- and z-values. But suppose now that all of these possess a definite value at all times. Then, you can specify a probability for each possible four-tuple of values, like so:
x_A | z_A | x_B | z_B | P(x_A, z_A, x_B, z_B)
----------------------------------------------
+1 | +1 | +1 | +1 | p_1
+1 | +1 | +1 | -1 | p_2
+1 | +1 | -1 | +1 | p_3
+1 | +1 | -1 | -1 | p_4
+1 | -1 | +1 | +1 | p_5
+1 | -1 | +1 | -1 | p_6
+1 | -1 | -1 | +1 | p_7
+1 | -1 | -1 | -1 | p_8
-1 | +1 | +1 | +1 | p_9
-1 | +1 | +1 | -1 | p_10
-1 | +1 | -1 | +1 | p_11
-1 | +1 | -1 | -1 | p_12
-1 | -1 | +1 | +1 | p_13
-1 | -1 | +1 | -1 | p_14
-1 | -1 | -1 | +1 | p_15
-1 | -1 | -1 | -1 | p_16
From this, you can compute probabilities for individual outcomes, such as P(xA = +1) = p1 + p2 + … + p8, as these are all the possibilities that have ‘xA = +1’. You can also compute probabilities for joint events, for example: P(xA = +1, xB = -1) = p3 + p4 + p7 + p8.
Finally, you can compute correlations. A correlation is the expected value of the product of two outcomes. As you remember from elementary probability theory, an expected value is just the sum of the possible values weighted by their probabilities. So the expected value of a die roll is 1*1/6 + 2*1/6 + 3*1/6 + 4*1/6 + 5*1/6 + 6*1/6 = 3.5. So, the expected value of xAxB, which I write as <xAxB>, will be:
<xAxB> = (+1)*(+1)*P(xA = +1, xB = +1) + (+1)*(-1)*P(xA = +1, xB = -1) + (-1)*(+1)*P(xA = -1, xB = +1) + (-1)*(-1)*P(xA = -1, xB = -1),
which, it turns out after some work, is:
<xAxB> = p1 + p2 - p3 - p4 + p5 + p6 - p7 - p8 - p9 - p10 + p11 + p12 - p13 - p14 + p15 + p16
Now, what does this quantity tell us? As I said, it gives the correlation between xA and xB: if it is equal to 1, then only the pi with positive sign are nonzero, and hence xA = xB always; conversely, if it is equal to -1, the two always take opposite values. A value of 0 means that knowing the value of xA does not tell us anything about the value of xB.
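If you’d rather let a computer grind through the bookkeeping, here’s a small symbolic check of that expansion - a sketch using sympy, with helper names of my own; the rows are generated in the same order as the table above:

```python
import sympy as sp

p = sp.symbols('p1:17')       # p[0]..p[15] stand for p_1..p_16 in the table above
rows = [(xA, zA, xB, zB)
        for xA in (+1, -1) for zA in (+1, -1)
        for xB in (+1, -1) for zB in (+1, -1)]   # same ordering as the table

def P(**cond):
    """Sum the p_i over all rows matching the given values, e.g. P(xA=+1, xB=-1)."""
    names = ('xA', 'zA', 'xB', 'zB')
    return sum(pi for pi, row in zip(p, rows)
               if all(row[names.index(k)] == v for k, v in cond.items()))

# <xAxB> straight from the definition ...
corr_from_def = (P(xA=+1, xB=+1) - P(xA=+1, xB=-1)
                 - P(xA=-1, xB=+1) + P(xA=-1, xB=-1))
# ... and the expanded form given above
corr_expanded = (p[0] + p[1] - p[2] - p[3] + p[4] + p[5] - p[6] - p[7]
                 - p[8] - p[9] + p[10] + p[11] - p[12] - p[13] + p[14] + p[15])
print(sp.simplify(corr_from_def - corr_expanded))   # prints 0
```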
We can now compute the following quantity—while arduous, it’s not technically difficult:
C = <xAxB> + <xAzB> + <zAxB> - <zAzB>
The result can be written in two equivalent forms:
C = 2 - 4*(p3 + p4 + p6 + p8 + p9 + p11 + p13 + p14)
= 4(p1 + p2 + p5 + p7 + p10 + p12 + p15 + p16) - 2
Now, every sum of probabilities is at least 0 and at most 1, which yields:
-2 <= C <= 2
Recall that all that we have assumed, here, is that there is a probability distribution such that in any given experiment, each combination of values for the four observables is present with a certain probability. Even if you haven’t bothered to check my sums here (which nobody would fault you for), this is the take-home message: whenever you assume that there are fixed values for the observables, you get a bound such as the above.
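In case you do want to check the sums without doing them by hand, here’s a short brute-force sketch (helper names are my own): it draws an arbitrary probability distribution over the 16 rows and confirms that C always lands between -2 and 2.

```python
from itertools import product
import random

# All 16 value assignments (x_A, z_A, x_B, z_B), in the same order as the table above.
rows = list(product([+1, -1], repeat=4))

# An arbitrary probability distribution p_1 .. p_16 over the rows.
weights = [random.random() for _ in rows]
probs = [w / sum(weights) for w in weights]

def corr(f):
    """Expected value of f(row) under the distribution probs."""
    return sum(p * f(r) for r, p in zip(rows, probs))

C = (corr(lambda r: r[0] * r[2])     # <xAxB>
     + corr(lambda r: r[0] * r[3])   # <xAzB>
     + corr(lambda r: r[1] * r[2])   # <zAxB>
     - corr(lambda r: r[1] * r[3]))  # <zAzB>

print(C)
assert -2 - 1e-9 <= C <= 2 + 1e-9    # holds for every choice of probabilities
```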
But now, quantum mechanics permits things to be arranged such as to yield a value of 2*sqrt(2) for the above quantity upon measurement (of, say, spin observables on a suitable Bell-state). Consequently, quantum mechanics falsifies the assumption that the above table exists: we can’t claim that each four-tuple of values is present with a certain probability.
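Here’s the quantum side as a numerical sketch - the Bell state and the rotated B-observables below are my choice of the standard CHSH settings, one concrete way of realizing the ‘suitable’ arrangement:

```python
import numpy as np

X = np.array([[0, 1], [1, 0]])    # Pauli x
Z = np.array([[1, 0], [0, -1]])   # Pauli z

# Bell state |phi+> = (|00> + |11>) / sqrt(2)
phi = np.array([1, 0, 0, 1]) / np.sqrt(2)

# Alice measures x and z directly; Bob measures rotated combinations of the two.
xA, zA = X, Z
xB, zB = (X + Z) / np.sqrt(2), (X - Z) / np.sqrt(2)

def corr(A, B):
    """Quantum expectation value <phi| A (x) B |phi>."""
    return phi @ np.kron(A, B) @ phi

C = corr(xA, xB) + corr(xA, zB) + corr(zA, xB) - corr(zA, zB)
print(C, 2 * np.sqrt(2))          # both approximately 2.828
```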
Now, the various options to reconcile experimental reality with this are the following:
- The probabilities aren’t fixed. Upon obtaining one value, probabilities for obtaining the other values must shift. In any concrete run of the experiment, this means that values themselves must change, as otherwise, they’d have been selected from the first moment with a certain probability. If we then ensure that observables measured in a single run are far apart from one another—by associating the observables indexed with ‘A’ to one particle, those indexed with ‘B’ to another—this entails a failure of locality.
- There is no such table. That is, there are no pre-defined values for the observables: then, you naturally don’t get the above bound for the correlations. Thus, there is no need to stipulate that there is any influence between the outcomes: there is no bound in the first place.
- We don’t get a fair sample. That’s the ‘superdeterminist’ option: things conspire such that we, for instance, preferentially perform a certain measurement if a certain value is present, leading to a higher estimate for its probability. Then, the bound can be violated, as well.
- Experiments don’t actually yield either one outcome or the other. That’s the ‘many worlds’-option: Each possible set of measurement outcomes is realized in a certain branch in such a way as to violate the above constraint.
So, that’s the thing: you need non-local influences to change predetermined values, if you want to violate the Bell-bound; but if there is no Bell-bound, because there are no predetermined values, you need no non-locality to violate it.
Or, as SMBC proposed, it’s just God fucking with us.
Why do you think I capitalized it? I was hoping someone would catch on.
Even God cannot factor a prime. But your real point is well taken. It is already known how, in principle at least, a quantum computer can factor a composite number. Of course, if it were a product of two 10,000-bit primes, you would apparently need that many qubits to do it.
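For what it’s worth, the number-theoretic trick involved can be illustrated classically on a toy number - the sketch below (my own helper) brute-forces the multiplicative order r of a modulo N, which is precisely the step a quantum computer does efficiently:

```python
from math import gcd

def factor_via_order(N, a):
    """Try to split N using the multiplicative order r of a modulo N."""
    g = gcd(a, N)
    if g != 1:
        return g                              # lucky: a already shares a factor with N
    r = 1
    while pow(a, r, N) != 1:                  # brute-force order finding (slow classically)
        r += 1
    if r % 2 == 1 or pow(a, r // 2, N) == N - 1:
        return None                           # this base a doesn't work, pick another
    return gcd(pow(a, r // 2, N) - 1, N)

print(factor_via_order(15, 7))                # prints 3
```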
You’re skeptical that 3 is a factor of 1? Heretic!
But how can it not be recursive? They are not assholes by choice but because they had to be.
So it’s simulations all the way down?
To respond far more seriously than you might have wished…
The question of superdeterminism has nothing to do with free will. No scientific concept has anything to do with free will, because free will is literally nonsense - it is an incoherent concept. Everything that happens is attributable to some combination of deterministic and non-deterministic (truly random) factors. Things happen either for reasons or they are random. And while we obviously have a strong intuition that “free will” means something sensible, nobody can really define it sensibly - because it does not comport with either determinism or randomness.
So, I know that definitions of superdeterminism talk about “free will”, but those are stupid definitions because “free will” is not defined. Superdeterminism is the idea that random sampling does not work. And if random sampling doesn’t work, that undermines pretty much all scientific knowledge.
Superdeterminism is a theory that the universe is a conspiracy. The universe certainly looks as though random sampling works; it’s the entire basis for developing generalized models from empirical data, and for all subsequent experimental testing of those models. And clearly this has produced vast amounts of incredibly useful and practical scientific knowledge. Superdeterminism says that the fact that random sampling appears to be a valid path to knowledge is all a set-up. It appears to work for everything else in order to fool us into thinking that entangled particles can spookily communicate with one another instantaneously.
A conspiracy like this must have been set up by assholes. Calling them assholes does not entail any implication that they had any “free will” about whether to be assholes, because - once again - free will is nonsense. An asshole is just someone who does mean things because that’s the way their brain works. Just like everything else in all universes, the way their brains work is attributable to some combination of deterministic and random factors. Free will is not cause and effect; free will is not rolling a die.
Only in the physicalist worldview, which is itself an unprovable assumption.
This is true, but, in practice, the behaviour of humans is astonishingly predictable. In any case, when talking about the behaviour of electrons and photons, I find it less confusing to stick to linear algebra and functional analysis than to worry about whether there is a conscious (whatever that means) observer, whether she has free will, and so on. The particles don’t care…