Double slit experiment - quantum mechanics

Cheesesteak’s re-presentation of the question is good. The thing I was trying to understand is what measurement actually is. If you put the screen there but don’t look at it, does the wave function still collapse?
Or is it just further connections with other quantum systems, such that the universe is just one enormous set of standing waves?

At a certain point, though, the universe can replace that noise with actual random noise. Like encrypted data vs. a sequence of true random numbers. Sure, with the encrypted file, the original data is still there in principle. But in practice it may be impossible to retrieve. No one would notice if you replaced it with true noise–at least not unless you ran the universe backwards, which seems to be impossible. Maybe this is a source of the arrow of time as well.

I dispute this. A single classical particle contains infinite information. If a particle’s X coordinate can take any real-number value, then it may take an infinite number of bits to express. But if all its relevant parameters are quantized, then the information is finite.
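
To put a rough number on “finite”: if position were quantized at, say, the Planck length, then a coordinate anywhere in the observable universe needs only a couple hundred bits per axis. A quick back-of-the-envelope sketch in Python (the radius and Planck length here are just rough values I’m assuming for illustration):
[code]
import math

# Rough assumed scales: observable-universe radius and the Planck length.
universe_radius_m = 4.4e26     # ~46 billion light-years
planck_length_m   = 1.6e-35

# Number of distinguishable positions along one axis, and bits to label one of them.
positions = universe_radius_m / planck_length_m
bits_per_axis = math.log2(positions)
print(f"{positions:.1e} positions, ~{bits_per_axis:.0f} bits per axis")
# ~2.8e61 positions, ~204 bits per axis: finite, versus the infinitely many bits
# an exact real-valued coordinate would need.
[/code]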

And in any case, the classical universe fails, at least for physics that resembles ours. Atoms aren’t stable without QM.

Perhaps exponential complexity comes naturally to the computational substrate that runs the universe. Preferring finite over infinite information seems obvious, but it’s less obvious that quantum computation is “easier” than classical computation in a general sense. This is getting rather philosophical, though.

Well, I look forward to seeing if “real” quantum computers can exist. If superposition doesn’t exist at the lowest levels, and is approximated by “dithering”, then quantum computers will never get better than a few hundred qubits. If it’s real, then we can build a million-qubit computer and solve problems that couldn’t be solved if you filled the universe with Planck-volume classical gates running at Planck-frequency clock speeds.
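
To gesture at the scale of that claim, here’s a rough back-of-the-envelope comparison (all the numbers are order-of-magnitude assumptions on my part, just for illustration):
[code]
import math

# Order-of-magnitude assumptions, just for scale.
universe_volume_m3  = 4e80      # observable universe
planck_volume_m3    = 4e-105
planck_frequency_hz = 1.9e43
universe_age_s      = 4.4e17

gates = universe_volume_m3 / planck_volume_m3          # ~1e185 Planck-volume "gates"
ops   = gates * planck_frequency_hz * universe_age_s   # every gate ticking since the Big Bang

print(f"classical operations ever: ~2^{math.log2(ops):.0f}")   # ~2^817
print("amplitudes in a general million-qubit state: 2^1000000")
# Even this absurdly generous classical budget (~2^817 operations) doesn't scratch
# the 2^1000000 amplitudes a million-qubit state can in principle involve.
[/code]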

Of course, if the Universe really is a simulation, then most likely the deeper levels where quantum mechanics is relevant don’t even exist until and unless someone builds a device that looks at them.

In a word: yes :). This sort of thing is generally framed in terms of Wigner’s Friend. Wigner’s friend opens the box containing Schrödinger’s cat, thus discovering it (one presumes) either dead or alive. Hence, if the cat was earlier in an indeterminate state, the wave function ‘collapses’ onto one of the possible alternatives.

But now consider the point of view of Wigner, who models the whole situation from outside the (incredibly well-isolated) lab. To Wigner, the whole system, his Friend included, is just an ordinary quantum-mechanical system; in particular, there’s nothing special about the Friend opening up the box—it’s just quantum systems interacting with one another (that is, if you don’t want to go with Wigner’s own conclusion, which is that consciousness—human consciousness, it seems, if cats may be in indeterminate states—is the magic sauce that collapses the wave function). So if we earlier had a superposition of ‘live cat’ and ‘dead cat’, after the Friend looks, we will just have a superposition of ‘live cat, happy Friend’ and ‘dead cat, sad Friend’, rather than a determinate state of either.

The question is, now, how this fits together. That’s where the interpretations differ. Wigner’s own interpretation proposes a break with the quantum description: quantum mechanics doesn’t apply to conscious entities, and thus, once we look at something, it assumes a determinate state, probably out of sheer embarrassment. That’s in principle a measurable distinction from ordinary quantum mechanics—we could make a complicated interference measurement to check whether the closed lab is in a superposed state, or a determinate one, after the Friend’s looking. As such, the perhaps strangest interpretation has the surprisingly down-to-Earth benefit of being empirically testable—in principle, at least; in reality, superposing conscious entities is something likely to elude our experimental capabilities for the foreseeable future.

On a many-worlds view, on the other hand, things are perfectly simple: the superposition merely refers to two different worlds; the Friend just splits into a happy and a sad version, while Wigner hasn’t yet split, and thus, must describe the whole lab in a superposed state (although, as described above, the many worlds view has other problems).

Bohmian mechanics essentially brute-forces the whole thing: the cat is either dead or alive, the Friend either happy or sad, nature just conspires to hide the respective variables from us. Picture it as many worlds plus a label on one of the worlds, essentially saying ‘and this is the real one’. This has seemed like a great solution to a lot of philosophers, and rather fewer physicists. Naturally, beyond its ad-hoc extravagance, this interpretation also has some more technical problems.

Finally, objective collapse theories will just posit that the wave function eventually collapses under its own weight—if things get too large, or too massive, a quantum superposition just randomly and indeterministically collapses to a definite state.

Which one, if any, of these or the other answers so far proposed is right is, of course, an open question.

I think what you’re proposing goes along the lines of an objective collapse theory. The basic gist is, superpositions occasionally collapse, indeterministically, onto a determinate state in a certain special basis. For single quantum systems, this will occur unobservably rarely; but for every set of entangled particles, it’s enough for one to collapse in order to have the whole shebang get determinate. Thus, macroscopic systems will practically never sustain superpositions for any appreciable amount of time.

This sort of thing is intriguing, and is just about nearing the threshold of observability—as it predicts deviations from the dynamics of quantum mechanics, at some point the quantum description will fail to give accurate predictions. But you need a certain finesse to massage the parameters such that things don’t collapse either too often or too rarely.

So does a single quantum particle, or wave mode. Not all parameters of a quantum system take on only discrete values; position is just as much a continuous variable as it is in classical mechanics.

But the general point is that the size of the state-space in quantum mechanics is exponential in the number of elements. So a quantum state of n qubits needs 2[sup]n[/sup] complex numbers to be represented, while the classical equivalent just needs n bits. A three-bit state might be (101), while the general three-qubit state is [a(000) + b(001) + c(010) + d(011) + e(100) + f(101) + g(110) + h(111)], with a-h complex numbers (of infinite precision). So even though each system can only be (read: will upon measurement only ever be found to be) in one of two states, we need an ‘infinite’ amount of data to fully specify the state.
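
A concrete way to see the blow-up, as a quick numpy sketch (the amplitudes here are just a random example state):
[code]
import numpy as np

n = 3
print("classical 3-bit state: (101)")         # n bits: n symbols, and that's everything

# A general n-qubit pure state needs 2^n complex amplitudes (a through h above).
amps = np.random.randn(2**n) + 1j * np.random.randn(2**n)
amps /= np.linalg.norm(amps)                  # normalize so probabilities sum to 1

for i, a in enumerate(amps):
    print(f"({i:03b})  amplitude {a.real:+.3f}{a.imag:+.3f}j   P = {abs(a)**2:.3f}")
# 8 complex numbers for 3 qubits, ~10^15 for 50 qubits, and so on; yet any single
# measurement still returns just one of the eight bit strings.
[/code]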

“Freeze the simulation! They’re building particle accelerators and we’ll need to take a few meetings before we decide what the results should be: the best alternatives are wavicles and plum-puddings, but if you have a better idea, let’s hear it.”

To be clear, I’m not trying to make a statement about the simulation hypothesis, even when I mention things like the “computational substrate” or talk about minimizing work or lazy evaluation.

It’s pretty clear that digital information underlies the universe in some fashion–the Bekenstein bound alone proves this. But I don’t think it necessarily implies that the universe runs on an actual computer. And it wouldn’t explain anything anyway, since then the question is what that computer runs on (is it computers all the way down?).
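
For anyone who wants to see the numbers: the Bekenstein bound caps the information in any region of finite size and energy. A quick sketch using the standard formula, with a 1 kg mass in a 1 m sphere as my own arbitrary example:
[code]
import math

hbar, c, ln2 = 1.0546e-34, 2.998e8, math.log(2)

def bekenstein_bits(radius_m, mass_kg):
    """Upper bound on the information in a sphere: I <= 2*pi*R*E / (hbar*c*ln 2)."""
    energy_j = mass_kg * c**2
    return 2 * math.pi * radius_m * energy_j / (hbar * c * ln2)

print(f"{bekenstein_bits(1.0, 1.0):.1e} bits")   # ~2.6e43 bits for 1 kg inside a 1 m sphere
[/code]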

The laziness part may arise naturally in some way. Perhaps, since entropy is fundamental and increases information over time, interesting universes need some way to slow the rise as much as possible, and QM serves that purpose by requiring a minimum energy threshold to move a system into a new state. Or, maybe universes with different laws in some way compete for resources, and quantized universes outcompete non-quantized ones by being more efficient (and/or producing more daughter universes).

All extremely speculative, of course. But maybe less crazy than just thinking we’re running on some alien computer.

I’ll have to read up on these theories. I’m not sure they quite match what I’m suggesting, but perhaps they are equivalent.

I posit that superposition is real, but not to an infinite degree. The way in which the superpositions are distributed is slightly noisy. This noise smooths out the distribution, so that running many experiments across different particles doesn’t display any quantization artifacts. They probably wouldn’t be visible anyway, since the number of states could be high, but not infinite–10[sup]5[/sup], 10[sup]10[/sup], 10[sup]100[/sup], etc.

But one state of a particle can only be entangled with one state in another. Because the number of combinations increases exponentially as you add particles, but the total possible number is fixed, eventually you run out of slots. The universe has to drop some combinations because there just aren’t enough to go around.

You would almost never notice this in practice, because things don’t get entangled to that degree, and our instruments aren’t precise enough. But we’d notice with a big quantum computer. Certain operations like the quantum Fourier transform increase some amplitudes relative to others, and the whole system is highly entangled. But if most states have zero amplitude, because there just aren’t enough available slots, then this doesn’t work. The number won’t get factorized because only a negligible fraction of the possible qubit combinations were analyzed. Even seemingly large numbers like a googol are nothing compared to the number of states among a million qubits.
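
To make that concrete, here’s a toy numpy simulation: a classical FFT standing in for the QFT on a 10-qubit register holding a period-8 input, comparing the ideal case against a hypothetical “only a few slots got amplitudes” case. The slot-limiting rule here is just my own crude stand-in for the idea, nothing standard:
[code]
import numpy as np

rng = np.random.default_rng(0)
n, r = 10, 8                             # 10-qubit register (1024 states), hidden period 8
N = 2 ** n
peaks = np.arange(0, N, N // r)          # the QFT should pile probability onto multiples of N/r

def peak_probability(psi):
    """Unitary DFT as a stand-in for the QFT; probability of measuring a useful peak."""
    spectrum = np.fft.fft(psi) / np.sqrt(N)
    return float(np.sum(np.abs(spectrum[peaks]) ** 2))

# Ideal case: equal amplitude on every multiple of r (128 basis states).
ideal = np.zeros(N, dtype=complex)
ideal[::r] = 1.0
ideal /= np.linalg.norm(ideal)

# "Slot-limited" case: only 4 of those 128 states actually carry amplitude.
limited = np.zeros(N, dtype=complex)
limited[rng.choice(np.flatnonzero(ideal), size=4, replace=False)] = 1.0
limited /= np.linalg.norm(limited)

print(f"ideal:        {peak_probability(ideal):.3f}")    # 1.000 -- every measurement is useful
print(f"slot-limited: {peak_probability(limited):.3f}")  # ~0.031 -- useful outcomes become rare
[/code]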

Schrödinger’s cat type situations are not impossible but they are difficult. Even the tiniest break in entanglement wrecks things. A lump of memory cells in my brain has to have perfectly consistent entanglement with the surrounding cells for it to maintain macroscopically distinct superpositions. In practice, this fails–the scrambling effect of propagating noise quickly breaks any large-scale coherence.

I mentioned earlier that I am assuming quantized spacetime–LQG, CDT, etc. Some version of this seems very likely to be true. And my hand-waving interpretation above does not allow infinitely divisible superpositions.

I’m not sure I exactly understand what you’re getting at, but taking this literally, it’s false: while there is something called ‘monogamy of entanglement’, this only applies to maximally entangled states—i. e. a particle can be maximally entangled with only one other particle, but can be less-than-maximally entangled with any number of particles. Indeed, that’s where entanglement theory gets really interesting, because it turns out that a particle can be entangled not just with a given number of other particles, but also in qualitatively different ways. So there are actually two different ways to have three particles entangled (the GHZ- and W-classes), and nine different ways for four! After that, it gets a bit complicated.

I don’t exactly understand what you meant by ‘slots’, but at least according to vanilla quantum mechanics, there are ways to entangle any number of particles with one another.

And maybe a quantum ‘particle’ is an event, rather than a particle? A measurement event. So an electron is just a measurement event of a probability function.


What you are describing is a standard interpretation/explanation of quantum mechanics found in textbooks, but your phrasing is a bit awkward. A particle is not a particle? Between measurements it’s not there?

A single (quantum) particle can be in one, or a combination of several, basic states at the same time. The (wave function collapse) idea is that from the wave function you can calculate the probability of various results of any measurement you set up. This measurement does affect the particle (its wave function).

But then again, we need to define what, exactly, counts as ‘measurement’, and how large classical systems emerge—speaking only for myself, I don’t feel much like an event, but rather, like something with a continuous existence, give or take the occasional whisky binge. But in theory, at least, I can be just as much in a superposed state as any electron.

One of the big problems I see in these threads is that while physicists can try to explain their math using words, it’s not possible to do the reverse. But non-physicists use words and words alone to describe physics. That can’t be done. Only math describes physics. Unless you can give us the math behind “an electron is just a measurement event of a probability function” it literally has no meaning. Non-physicists never want to hear this, but I’ve never seen any way around it.

The well-known physicist William Clinton described this best when he said:

I meant to get back to this one since I described my idea poorly before. Well, I’ll probably describe it poorly again (since it’s hardly a fully baked idea), but it’s worth another shot.

The goal: to remove all continuous variables from physical theory. I claim they violate the Bekenstein bound and furthermore are unaesthetic.

I’ll illustrate with a simple system of four particles, assuming for the moment that we’re only concerned with spin.

Let’s pretend for now that we live in only a barely quantum world: that each particle has exactly four equally-probable superposition states, which I call “slots”.

If the states are evenly distributed, as we might expect if each of them is subject to random thermal or other disturbances, then we might find that on average two of them are spin up and two are spin down. This won’t always be the case, but since we tend to only measure one state at a time, it’s hard to detect that there’s any quantization in superposition space going on. Indeed, if we leave entanglement out of the picture, we wouldn’t really know even if there were only one state.

Let’s assume now that our four particles are entangled. What does that mean, here? I posit that each slot of each particle can only have one link to another slot on another particle. We can link to other particles, in a chain or network. But a slot on particle A only connects to one slot on particle B.
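
Since it’s easier for me to reason about this concretely, here’s a tiny Python sketch of the toy rules as I’m currently imagining them (four slots per particle, each slot of A linkable to at most one slot of B). Everything here (the class, the link rule, the “measurement”) is just my own toy model, not standard QM:
[code]
import random

N_SLOTS = 4
SPINS = ("up", "down")

class Particle:
    def __init__(self, name):
        self.name = name
        # Each slot holds a definite spin; on average about half up, half down.
        self.slots = [random.choice(SPINS) for _ in range(N_SLOTS)]
        # links[other_name] maps my slot index -> that particle's slot index.
        self.links = {}

def entangle(a, b, pairs):
    """Link slots pairwise: each slot of a connects to at most one slot of b."""
    assert len({i for i, _ in pairs}) == len(pairs)   # no slot of a used twice
    assert len({j for _, j in pairs}) == len(pairs)   # no slot of b used twice
    a.links[b.name] = dict(pairs)

a, b = Particle("A"), Particle("B")
entangle(a, b, [(0, 2), (1, 0), (3, 3)])     # a partial bijection: 3 of 4 slots linked

# A "measurement" of A picks one of its slots; B's linked slot (if any) comes along.
i = random.randrange(N_SLOTS)
j = a.links["B"].get(i)
print(f"A, slot {i}: {a.slots[i]};  linked slot on B: "
      f"{b.slots[j] if j is not None else 'none (unlinked)'}")
[/code]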

What does that mean for a quantum computer? In principle, our starting point for (say) Shor’s algorithm is to initialize the registers into a state where each of |0000>, |0001>, etc. is equally probable. But if each particle has only four “slots”, then only four of the states are possible (which ones presumably depend on fine details of how the state was set up). So while your QC will appear to operate, in practice most of the states are missing, and the answer you do measure will be one of the more probable but wrong ones (the kind you’re likely to get from quantum algorithms occasionally anyway).

Of course four slots are far too few and we would have noticed the incorrect correlations if that were the case. But we wouldn’t notice if there were 2[sup]64[/sup] slots, or 2[sup]1024[/sup]. I don’t think we could ever notice that, except in the case of a truly large scale quantum computer. That is, one that can factor (say) million-bit numbers into roughly equal primes.

How does this avoid the measurement problem? Well, as soon as we start making observations, our instruments entangle themselves with the target particles. And our instruments are not themselves in a coherent superposition (small bunches of particles may form short-lived superposition states, but it’s very unlikely to be persistent across the whole device). So once this entanglement happens, most of the slots end up being discarded as inconsistent; they thermalize and the entanglement is broken. So with very rare exceptions, the instrument (and our memories, etc.) only ever record one outcome. It’s not that it’s impossible to set up a situation where there’s more than one recorded outcome to an experiment; it’s just that it’s very unlikely to last very long as the entanglement networks propagate between particles.

I need to think more clearly about how these entanglement networks actually evolve over time. What I have in mind is that highly ordered states, as with the QC, are delicate to set up and prone to disturbance. The usual state of things is a kind of rat’s-nest of wiring, with links coming and going constantly.

OK, sure, replace the continuous variables with discrete ones. But which discrete ones, and with what values? Without specifying that, you’ve got nothing.

Generically, discrete spacetimes sit very poorly with continuous symmetries, such as the Lorentz symmetry of special relativity. You can get very strong bounds on the amount of discreteness by observing light that has traveled a very long distance; a breaking of Lorentz symmetry would lead to a certain amount of dispersion, with light of different frequencies arriving at slightly different times. Current bounds already lie beyond the Planck scale, so the simple idea of getting the Bekenstein bound as just being due to the discreteness of spacetime within a volume is out of the picture (naively, the Bekenstein bound would suggest a smallest surface area of four Planck units, which suggests a discreteness on the order of the Planck length).
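
For scale: the standard leading-order estimate for that dispersion is a time lag of roughly (E_photon / E_QG) x (distance / c), which for gamma-ray-burst photons over cosmological distances comes out to tens of milliseconds, i.e. something telescopes can actually time. A rough sketch with example numbers of my own choosing:
[code]
# Leading-order Lorentz-violation dispersion estimate: dt ~ (E_photon / E_QG) * (D / c).
# Ignores cosmological-expansion corrections; all numbers below are just example scales.
E_photon_GeV   = 10.0        # a hard gamma-ray-burst photon
E_planck_GeV   = 1.22e19     # taking the quantum-gravity scale at the Planck energy
distance_ly    = 1.0e9       # one billion light-years
seconds_per_ly = 3.156e7     # light needs one year per light-year

dt = (E_photon_GeV / E_planck_GeV) * distance_ly * seconds_per_ly
print(f"arrival-time lag ~ {dt * 1e3:.0f} ms")   # ~26 ms: measurable, hence the strong bounds
[/code]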

I think the most promising approach to explaining the Bekenstein-Hawking entropy actually depends on the existence of large-scale entanglement: in quantum field theory, the vacuum state is very highly entangled, and if you throw away a part of an entangled state, the rest you have left over has a certain amount of entropy, known as the entanglement entropy. For many natural systems, and I think generically in QFT ground states, this entropy, like the BH entropy, scales with the area of the surface between the two parts of a system—so if you ‘cut out’ a part of spacetime, the entropy scales with the area covering the volume you’ve cut out.

There are certain problems for straightforwardly identifying this entropy with the BH entropy, such as the need for an appropriate regularization, but there’s lots of interesting work being done on this issue.

I still don’t understand what you mean by these ‘slots’. Are these the states the system can be in—i. e. either particle can be in states (1), (2), (3) or (4)? Or do you mean that each particle can be in either of two states, which can be superposed in four distinct ways? Something like a(1) + b(2), where (a, b) can have four different values, instead of being complex numbers?

But multi-particle states can be entangled in qualitatively different ways. Take the two three-particle states (leaving out the normalization for convenience):
|W> = |100> + |010> + |001>
|GHZ> = |000> + |111>
They can’t be converted into one another using only transformations that act on the particles separately. A measurement on any particle of the state |GHZ>, no matter the outcome, will collapse the entire superposition, while a 0-outcome of a measurement on the |W> state will leave the remaining two particles entangled. How does that work in the ‘slot’-picture?
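
In case it helps to see that difference concretely, here’s a quick numpy check: build both states, project the first particle onto the 0 outcome, and compute the entanglement entropy of the leftover pair (0 for a product state, 1 for a Bell-like state):
[code]
import numpy as np

def ket(bits):
    """Computational-basis vector |bits> for three qubits."""
    v = np.zeros(8)
    v[int(bits, 2)] = 1.0
    return v

W   = (ket("100") + ket("010") + ket("001")) / np.sqrt(3)
GHZ = (ket("000") + ket("111")) / np.sqrt(2)

def after_first_qubit_reads_zero(state):
    """Project the first qubit onto |0> and renormalize the remaining two qubits."""
    rest = state.reshape(2, 4)[0]
    return rest / np.linalg.norm(rest)

def entanglement_entropy(pair_state):
    """Entropy of one leftover qubit: 0 for a product state, 1 for a Bell-like state."""
    schmidt = np.linalg.svd(pair_state.reshape(2, 2), compute_uv=False)
    p = schmidt[schmidt > 1e-12] ** 2
    return float(-(p * np.log2(p)).sum()) + 0.0   # "+ 0.0" just turns -0.0 into 0.0

print(entanglement_entropy(after_first_qubit_reads_zero(W)))    # 1.0: the pair is still entangled
print(entanglement_entropy(after_first_qubit_reads_zero(GHZ)))  # 0.0: the superposition has fully collapsed
[/code]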

You could consider the whole thing as a hypergraph, instead of an ordinary graph, where more than two particles can be linked by an edge; the |W>-state would be an ordinary graph, with each two particles linked by an edge, where measuring one particle simply cuts its link to the other two, whereas the |GHZ>-state would be a hypergraph with a single hyperedge connecting all three particles. But then it’s not really obvious that you gain anything.
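
If it helps, the bookkeeping for that picture is tiny; here’s my own toy encoding of it in Python:
[code]
# Toy encoding: entanglement links as edges (pairs) or hyperedges (larger sets).
W_links   = {frozenset({1, 2}), frozenset({1, 3}), frozenset({2, 3})}  # pairwise graph
GHZ_links = {frozenset({1, 2, 3})}                                     # one single hyperedge

def measure(links, particle):
    """Measuring a particle cuts every (hyper)edge that touches it."""
    return {edge for edge in links if particle not in edge}

print(measure(W_links, 1))    # {frozenset({2, 3})}: particles 2 and 3 stay linked
print(measure(GHZ_links, 1))  # set(): nothing left; the whole superposition is gone
[/code]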

Again, what you describe sounds very close to something like GRW theory, where sometimes superpositions just randomly collapse.

It would actually simplify QM if it didn’t have to deal with continuous variables, as extra, more complicated machinery needs to be added to ‘discrete’ QM to deal with them. Unfortunately, though, the effects of breaking continuous symmetries are profound.

I thought I was pretty clear: my proposal replaces the continuous “superposition space” with a discrete one. Ordinary formulations of QM don’t rule out states with a probability of, say, exactly π/4. Or states with a probability of 1/2[sup]1024[/sup] (this latter case is necessary if we’re ever going to get truly powerful quantum computers). I gave a (very handwavey) alternative with a testable prediction.

There are of course other remaining continuous variables. I mentioned spacetime earlier. These already have a number of discretization efforts going on, such as Causal Dynamical Triangulation. I look forward to seeing if these approaches go anywhere.

Agreed. There are some mysteries here, and a naive chipping up of spacetime probably will not work. Perhaps the information is shared between all matter in a region of space: a few isolated photons do not come close to saturating the BB, and so their positions can be specified very precisely, to far finer than the Planck scale. But get to black hole densities and each particle has fewer bits at its disposal. Things get “snapped” at a much coarser granularity.

I’ll have to read up on that.

It is in all of those states. From the particle’s perspective, there’s no preferred state. Each one is a definite state, and there might be many such slots that correspond to a given measurement. The goal here is that every probability ends up being a rational number with the slot count as the denominator.

In terms of entanglement, my thinking is that if you consider the “slots” of any pair of particles a and b to be the sets A and B, then there is a partial bijection between the two sets. There are at most N links between the two sets, usually fewer. But this is true for any two particles, so we’re able to have sets of multiple entangled particles.

This poses a problem. My bijective suggestion is no longer compatible with a system entangled in two different ways such as this.

I can see how your hypergraph approach works–but that invites exponential complexity again, since among a set of particles, each state of each one may be a member of an edge or not.

There may be a way to salvage my approach by giving up bijectivity for an NxN (not hyper) edge set. That is still not too bad, but I need to work out the edge rules. I drew some graphs and there is some hope but I need to think about it more clearly.

Some parts sound very similar, in particular the part about the particle entangling itself with the measurement device, which consists of many particles, ensuring that one quickly converges on a definite measurement. However, my thinking is that it is still deterministic, but the rules governing entanglement tend to scramble information in a way that the outcome appears random.

An issue with deterministic collapse is that collapse is non-local, so a deterministic process for collapse allows superluminal signalling unless that process is hidden, such as in a (non-local) hidden variables theory. I think that is the essential reason that in GRW theory particles spontaneously collapse stochastically.