Here is a summary on Gizmodo of at least a portion of the model laid out in the book:
It puts forward the idea that, because quantum entanglement exists, both particles must reflect the influence of some deeper level of reality, independent of spacetime. It’s like seeing shapes at opposite points in a kaleidoscope move synchronously because they are different reflections of the same piece of colored glass.
Interesting.
I would assume that the inquiry into a deeper level of reality has been ongoing. Does a book like this frame a new approach, or is it more of an overview of the thinking that’s out there?
IANAP, but I think there is something in our universe that serves as the foundation for everything we see, experience, measure, etc. And this “something” is right in front of our eyes. We can see it, but we can’t comprehend it due to our limited intelligence. And it’s likely we never will be able to comprehend it.
It’s not a mainstream view if that’s what you’re asking.
Warning to the reader, I’m so out of my depth here that if I could breathe through the top of my head I’d still be sucking in water.
It looks (and I’ve only read the link and some background on the two people mentioned in it) more like a very high-level discussion of the possibility that spacetime emerges from something. By removing the necessity for locality and causality, we might find that our perception of spacetime is superfluous to understanding nature.
Nima Arkani-Hamed has papers on how, without a spacetime background, you can get physical features to emerge from the math. See, for instance, The Amplituhedron.
It’s a fact that entanglement exists, and that it behaves in at least some ways completely contrary to what anyone can reasonably call “intuitive”. One can therefore say that there is in some sense a “layer of reality” that’s more accurate, or fundamental, or what-have-you, than what humans perceive. But it’s little more than wishful thinking to suppose that that fundamental layer of reality is in any way comprehensible to humans, beyond being able to do the math about it. Sure, it’s possible that if we knew more details about it, it would turn out to make intuitive sense again… but it’s also possible that no matter how much we learn, it’ll still seem fundamentally odd to us.
In most regards in physics, our intuitions are a very valuable tool, and so we tend to rely on them to a great degree. But that’s a property of us, not of physics. Our intuitions tend to match reality because we’ve evolved in reality, and those whose intuitions match it have been selected for by evolutionary forces. But that’s only any good for those situations in which we evolved, and the further you get from those sorts of situations, the less useful you should expect our intuition to be. Most quantum effects are completely imperceptible to a tropical grasslands ape, and so our intuitions are useless for quantum mechanics.
Yes, that is part of what I am asking. I am not surprised that it is not a mainstream view, but would be surprised if it wasn’t an area of current inquiry.
It almost sounds similar to what I have read about a hypothesis for the origin of gravity as a byproduct of something called, I think, “the holographic principle”.
The holographic model looks good on paper, and it certainly includes some element of nonlocality. Whether it’s enough nonlocality to satisfy the EPR result, I don’t know. That said, though, the simplest versions of the holographic model have already been ruled out by experiments, which makes it a lot less appealing.
For a while there, there was a lot of hope in the holographic model, since one gravitational wave detector (which wasn’t designed to look for evidence of the holographic model, but which turned out to be quite well-suited for it anyway) had a bunch of unexplained noise in its data that nearly exactly matched what the simplest holographic models were predicting. So, of course, the researchers set about trying to find other explanations for that noise. Eventually they found one, and so the data don’t support that model after all.
There’s really no question of ‘explaining’ entanglement using holography: in all the holographic models I know of, the theory from which ‘our’ (3+1)-d universe (not counting possible small extra dimensions) emerges is itself a quantum theory featuring entanglement. In fact, an intriguing relation between space in the emergent holographic spacetime and entanglement within the boundary theory has become an area of intensive research in recent years. In the most well-studied example, the AdS/CFT correspondence, the entanglement entropy of a patch of the theory on the boundary (a conformal field theory, basically a special kind of quantum field theory invariant under scale transformations) is given by the area of a minimal surface in the emergent spacetime. So in some sense, the entanglement degrees of freedom seem to ‘know’ about, or encode, the emergent space.
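(For the record, that entropy-area relation is, as far as I know, usually called the Ryu-Takayanagi formula; with S_A the entanglement entropy of a boundary region A and \gamma_A the minimal bulk surface anchored to the edge of A, it reads

    S_A = \frac{\mathrm{Area}(\gamma_A)}{4 G_N},

with G_N Newton’s constant in the bulk. So a quantity defined purely in the boundary theory computes a geometric area in the emergent spacetime.)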
In fact, people have studied the dynamics of this entanglement (using statistical methods: there’s an entropy associated with entanglement, basically related to the information about a state you miss if you neglect the entanglement, which obeys thermodynamical laws), and out pop the Einstein equations for the higher-dimensional spacetime, albeit only in linearized form, since it’s only been studied approximately so far!
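(In case anyone wants the gist in formulas: the entropy in question is the von Neumann entropy of the reduced state of a region, S_A = -\mathrm{Tr}(\rho_A \ln \rho_A), and the ‘thermodynamical law’ it obeys is, if I’m remembering the literature right, a ‘first law of entanglement’,

    \delta S_A = \delta \langle H_A \rangle,

relating small changes in the entanglement entropy to small changes in the expectation value of the so-called modular Hamiltonian H_A; demanding this for all ball-shaped boundary regions is what reproduces the linearized Einstein equations in the bulk.)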
Of course, all of this so far only works exactly in cases that we know don’t apply to our universe, most notably in spacetimes with a negative cosmological constant (Anti-de Sitter space, which is where the AdS comes from), while ours is positive. There’s also a dS/CFT correspondence, but things here are even more conjectural.
Are you referring to Craig Hogan’s ‘holographic noise’ stuff here? If so, I think few people take that to have any bearing on holographic models as they are typically thought of. Basically, he takes the argument that the entropy of a holographic screen is proportional to its area, measured in Planck units, to mean that the emergent spacetime should have a certain ‘graininess’, due to only being able to resolve, e.g., particle positions up to some scale. But this is at best very heuristic, and I don’t see how something like this would crop up in those few exact models of holography we have: in AdS/CFT, the emergent spacetime is governed by string theory/quantum gravity, which doesn’t produce such an effect (to the best of my knowledge).
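(The area-entropy relation he starts from is just the standard holographic bound: the maximum entropy on a screen of area A is

    S = \frac{k_B A}{4 \ell_P^2}, \qquad \ell_P = \sqrt{\hbar G / c^3},

i.e., one quarter of the area in units of the Planck length squared. The contentious part is the extra step from that bound to a measurable ‘jitter’ in positions.)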
As for Musser’s book, it’s definitely on my ‘to read’ list, but I don’t know when I’ll find the time; if I remember, maybe I’ll check back in to report my impression (not that I necessarily expect anyone to care).
Well, I’m not entirely sure what you mean by a ‘deeper level of reality’, and there’s not really a single unified approach as such, but it’s certainly something people study, and have for quite some time. The idea of pregeometry, that is, of space (and perhaps time) emerging from some more fundamental degrees of freedom, is quite old: Wheeler gave it that name sometime in the ’60s, but I don’t think he ever had a well-worked-out model (I’m not aware of one, at any rate).
But several of the modern approaches to quantum gravity feature at least some form of spacetime emergence: in the string-theory-related AdS/CFT correspondence, a spatial dimension in the bulk emerges from the boundary theory; in loop quantum gravity, the apparently smooth spacetime description emerges, in some sense, from loop-variable degrees of freedom, yielding, for instance, discrete areas and volumes; in causal set theory, the fundamental degrees of freedom are discrete ‘events’ sprinkled into a manifold and obeying a partial order relation; causal dynamical triangulation proposes continuous spacetime emerging from tiny triangles glued together (which can be shown to naturally yield a description not entirely unlike our spacetime on large enough scales); quantum graphity proposes a graph-like structure to replace spacetime; and so on.
All of these are actively studied theories, some more mainstream than others, and all of them could be said to contain some form of ‘deeper level of reality’, at least in so far as that spacetime is not supposed to be fundamental.
If this is a well-written book that at least cogently addresses the philosophical rift between the quantum model and an intuitive desire to understand reality, I want to read it.
I mean, I doubt it has the answers - what are the odds? But if it’s not crackpot, and it fairly articulates the tension between a model that just says ‘it both is and isn’t until you observe it’ and our intuitive grasp of reality, then I think it would help me cogitate on the whole thing - even if I’m eventually convinced that reality really is such that a particle doesn’t have all its properties until measured.
So, I guess what I’m asking is, does anyone know if it’s well written and not crackpot (even if not mainstream)?
How fun is that? Cool - thanks HMHW. Obviously I only get a % of that but appreciate the overarching point in response to my question. That next level of inquiry is fascinating but so hard for me as nothing more than a science fanboy to grasp. But the existence of quantum entanglement and other questions point to ways our theories will be further clarified/enhanced.
Bup - no clue. But for you, I’d check out Jim Holt’s book. I think it’s Why Does the World Exist?: An Existential Detective Story. I started a thread on it that I can look for.
By the way, today was a bit of a minor ‘dawning of a new age’ if you’re interested in quantum foundations—two papers came out on the arXiv, the physics pre-print server, conclusively demonstrating the impossibility of ‘local realism’—that is, the attitude that all of the properties of a physical system should be objectively fixed at all times, in conjunction with the idea that there are no faster-than-light influences.
The important thing those experiments did was to close, simultaneously, two experimental loopholes by which the correlations could, in some rather abstruse models, be made to look like they conflict with local realism even though the underlying physics obeys it. Now, this was first achieved back in August, but the statistics of that experiment were rather weak; the two new experiments feature far more impressive data, reducing the wiggle room to (virtually) zero.
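For anyone who wants to see the actual quantity at stake: the experiments measure (a version of) the CHSH correlator, which any local-realist model keeps between -2 and 2, while quantum mechanics predicts a magnitude of up to 2\sqrt{2}. A minimal sketch, using the textbook singlet-state correlation E(a,b) = -cos(a - b) rather than anything specific to these papers:

    import numpy as np

    # Quantum prediction for the correlation of spin measurements along
    # directions a and b (angles in radians) on a singlet pair.
    def E(a, b):
        return -np.cos(a - b)

    # Measurement settings that maximize the quantum violation.
    a1, a2 = 0.0, np.pi / 2            # Alice's two settings
    b1, b2 = np.pi / 4, 3 * np.pi / 4  # Bob's two settings

    # CHSH combination: any local-realist model satisfies |S| <= 2.
    S = E(a1, b1) - E(a1, b2) + E(a2, b1) + E(a2, b2)
    print(f"CHSH value |S| = {abs(S):.4f}")  # ~2.8284 = 2*sqrt(2) > 2

The experiments find violations of the bound of 2, which is exactly the conflict with local realism.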
Unfortunately, this is often presented as a claim that ‘nature is non-local’, but that’s too quick. That conclusion only follows if one also holds that all properties of a system are well defined at all times; it’s only upon joining this assumption with locality that one gets a contradiction with the experiments. Hence, nature is only non-local if all properties have precise and definite values at all times, including ones we just can’t get at experimentally, as in the case of simultaneous position and momentum measurements.
In particular, it’s strictly speaking wrong to say that quantum mechanics is nonlocal: since it doesn’t include definite values for all observables, it evades the constraint without a necessity for nonlocal interactions. But still, people—even professional researchers—do it all the time, which causes no end of confusion.
Hmm - it almost sounds like they are saying that the Heisenberg uncertainty principle always wins. I will have to see what I can read about this. Thank you for the heads up! Worth its own discussion.
Oh, sure, Heisenberg uncertainty always wins, and that’s one aspect of quantum mechanics that can be understood intuitively (if you train your intuition right). The mind-boggling question is just what the uncertainty means. Is it a limitation on measurements, or a limitation on the values themselves? One would intuitively expect that it’s just a limit on measurements, but that intuitive expectation, combined with a few other similarly-intuitive ideas, together forms a framework which is definitely in conflict with reality, so at least one of those intuitive ideas must be wrong.
I thought the whole “Quantum mechanics is the statistical version of an underlying deterministic theory” thing went out with Einstein. Do people seriously still believe that both parts of a conjugate pair have a value at the same time? What do they think the canonical commutation relation is?
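(For reference, the commutation relation in question, and the uncertainty bound it forces via the Robertson inequality:

    [\hat{x}, \hat{p}] = i\hbar \quad\Rightarrow\quad \Delta x\,\Delta p \ge \tfrac{1}{2}\left|\langle[\hat{x},\hat{p}]\rangle\right| = \frac{\hbar}{2}.

A state with \Delta x = \Delta p = 0 would violate this bound, so no such state exists.)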
For the peanut gallery: Spacelike separation is Special Relativity talk for “These events happened closely enough in time and far away enough in space that if either of them caused the other, it was due to some information traveling faster than light and, therefore, going back in time from the perspective of some observers.” If you think locality works, you think time travel is impossible, because you hold to the idea that causes should precede effects, dammit.
(The antonym for spacelike is timelike: “These events happened far enough away in time and close enough in space that everyone agrees which one happened first, so one event could have caused the other and causality would still work.”)
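(In formulas, using the invariant interval with the (+,-,-,-) sign convention:

    (\Delta s)^2 = c^2 (\Delta t)^2 - |\Delta \vec{x}|^2,

spacelike separation is (\Delta s)^2 < 0, timelike is (\Delta s)^2 > 0, and the boundary case (\Delta s)^2 = 0 is lightlike: the events can just barely be connected by a light signal.)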
Anyway, most of the abstract is statistics-speak for “If local realism really is real, everyone in the research team is the unluckiest SOB on the planet” or thereabouts. They got p-values social scientists would commit genocide for; it seems pretty reasonable that these results are correct, and local realism is not. This is why I like physics: Your next experiment could destroy someone’s entire philosophy!
Well, local realism is definitely out: Everyone agrees on that, whether we like it or not. But so far as I know, there’s nothing in the actual science that says whether it’s the realism or the locality (or both) that we ought to give up. It’s pretty much just a philosophical matter, not a scientific one. And philosophically, if forced to answer, I think I prefer giving up locality over giving up realism, but I can’t defend that position in any logical way.
I’m the opposite: I think it’s more parsimonious to assume the Uncertainty Principle is deeply true than to assume Special Relativity is deeply false.
First, the Uncertainty Principle seems to fall naturally out of the simple fact that conjugate pairs are related by a Fourier transform. The fact that the wave “analogy” extends this far suggests, maybe, that it isn’t just an artifact of a formalism but something deeper: that is, position and momentum can’t both be sharply defined at the same time for the same reason time and frequency can’t both be sharply defined at the same time.
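(To spell that out: for any Fourier pair, the bandwidth theorem bounds the product of the spreads, \Delta t\,\Delta\omega \ge 1/2 for time and angular frequency, and likewise \Delta x\,\Delta k \ge 1/2 for a wavefunction and its wavenumber content. Multiplying through by \hbar gives

    \Delta x\,\Delta p \ge \frac{\hbar}{2},

so the only quantum input is the de Broglie relation p = \hbar k.)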
Second, we understand why we don’t see macroscopic violations of realism, but we (or, at least, I) don’t understand why we wouldn’t see macroscopic violations of locality if locality were what we had to throw out.
I hope the next round of experiments will determine which of us is right. And, hey, if I’m wrong, superluminal signalling!