Quantized space (Physics)

Assume space is quantized, as in Loop Quantum Gravity (and perhaps other theories; this is not a question about LQG per se).

Several questions:

1. Is there any reason that the size of a space quantum should bear any relation to the Planck length?

2. Is there any reason why the reduced Planck constant h/(2*pi), rather than h, is used in computing Planck units?

3. Would a fundamental particle’s location be one specific quantum, or would even these fill more than one spot, and hence not be a “point” particle?

4. Assuming the answer to 3 is yes: when a particle moves, can it only stay in the same quantum or move to an “adjacent” quantum? I assume so, since otherwise it could travel faster than light.

5. If 4 is true, how can the speed of light be equal in all directions? It doesn’t seem that 3-D packings of the quanta could work like this. That is, you’d be moving along some directions via “n steps up and m to the right”, leaving you a different distance away from your start than “n + m steps up” would.

To the first two questions, the size scale of the quantization is a complete guess, aside from knowing that it’s really, really small. The Planck scale, or something in that vicinity, is just the best guess anyone has. And to the precision we’re able to make our guesses, it doesn’t really matter if you use h or hbar, but hbar is usually considered to be more significant in modern physics.
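To put a rough number on the h-versus-hbar point, here’s a back-of-the-envelope check in Python (rounded constants, purely illustrative): swapping h for hbar in the usual definition of the Planck length changes it only by a factor of sqrt(2*pi), about 2.5, which is nothing compared to how uncertain the whole scale is.

```python
import math

# Rounded constants -- this is only an order-of-magnitude illustration.
G    = 6.674e-11          # gravitational constant, m^3 kg^-1 s^-2
c    = 2.998e8            # speed of light, m/s
h    = 6.626e-34          # Planck constant, J s
hbar = h / (2 * math.pi)  # reduced Planck constant

l_p_hbar = math.sqrt(hbar * G / c**3)  # usual Planck length, ~1.6e-35 m
l_p_h    = math.sqrt(h * G / c**3)     # same formula with h instead of hbar

print(f"with hbar: {l_p_hbar:.3e} m")
print(f"with h:    {l_p_h:.3e} m")
print(f"ratio:     {l_p_h / l_p_hbar:.3f} (= sqrt(2*pi))")
```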

To the third and fourth questions, well, nobody ever said that quantization of space was easy. If the answers to questions like that were simple, we’d have it figured out already.

Thanks, Chronos. As an undergraduate physics major, I always appreciate your answers to physics questions.

Regarding your point about the speed of light possibly not being the same in all directions (question 5):

I think you are mixing up two things. Space could be quantized, but that doesn’t imply that it’s actually laid out as some orderly arranged three- (or N-) dimensional “stack” of tiny cubes or whatever, if you get what I mean. Though it is hard, at least in my mind, to separate the two parts of the problem.

It doesn’t have to be laid out as stacked cubes, but the quanta have to be arranged somehow, and it seems to me that in any such arrangement some paths consisting of the same number of quanta would have different physical lengths.
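Here’s a toy version of that worry in Python, assuming (purely for illustration) a cubic lattice where a step can only go to one of the six face-neighbouring cells. The minimum number of steps between two cells is then the “taxicab” distance |dx| + |dy| + |dz|, which disagrees with the Euclidean distance by a direction-dependent factor of up to sqrt(3):

```python
import math

# Toy cubic lattice: a step moves to one of the 6 face-neighbouring cells,
# so the minimum step count between cells is the taxicab distance.
def step_distance(a, b):
    return sum(abs(ai - bi) for ai, bi in zip(a, b))

def euclidean_distance(a, b):
    return math.sqrt(sum((ai - bi) ** 2 for ai, bi in zip(a, b)))

origin = (0, 0, 0)
for target in [(10, 0, 0), (5, 5, 0), (6, 6, 6)]:
    steps = step_distance(origin, target)
    dist  = euclidean_distance(origin, target)
    print(f"{target}: {steps} steps, Euclidean {dist:.2f}, ratio {steps / dist:.2f}")
```

Ten steps straight along an axis and ten steps along a diagonal end up at quite different Euclidean distances, which is exactly the anisotropy being asked about. The relational picture described further down is one way such models avoid building in a preferred grid at all.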

Another possibly related question: if space and time are quantized and the moves are discrete, I wonder how the particle “remembers” which way it was going between steps. It would seem to have to know more than where it is and where it was, since its path would have to be a slightly crooked one (on the Planck scale) that approximates a straight line. Since there is an uncountably infinite number of straight paths through a point in Euclidean space, but only N entry points into a space quantum (where N is the number of quanta “touching” it), I don’t see how a photon “knows” which way to proceed. But then if I did, I guess I’d be a theoretical physicist.
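As a loose analogy only (a line-drawing algorithm, not a claim about how any quantum-gravity model actually works): a walker that approximates an arbitrary straight line on a grid does indeed need to carry extra state beyond its current cell, namely a running error term, as in Bresenham-style rasterization. Something would have to play that role for a crooked Planck-scale path to average out to a straight line:

```python
# Toy 2D grid walk approximating the direction (dx, dy), with 0 < dy <= dx.
# The error accumulator is the "memory" beyond the walker's current cell.
def walk_line(dx, dy, steps):
    x = y = 0
    err = 0.0                  # how far the walker has fallen below the ideal line
    path = [(x, y)]
    for _ in range(steps):
        x += 1
        err += dy / dx
        if err >= 0.5:         # step up whenever the ideal line has pulled ahead
            y += 1
            err -= 1.0
        path.append((x, y))
    return path

print(walk_line(5, 2, 10))     # crooked path hugging the line y = 0.4 * x
```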

In many such models, space becomes ‘relational’: how distant one thing is from another is, roughly, a matter of how directly two events are connected. So if you visualize spacetime as a graph, in which events (‘spacetime points’) are the vertices, then the number of steps you have to take to get from one event to another determines how far apart they are.

Some models, for instance, have an ageometric phase, in which the spacetime graph is maximally connected: there exists an edge, i.e. a link, between any two events. In this phase, no ‘space’ in the ordinary sense exists, since everything is right ‘on top’ of everything else. Then, in a sort of condensation or cooling process, links between events get removed, and space(time) emerges.
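A minimal sketch of that relational/graph-distance idea, as a toy in Python (an arbitrary eight-event example, not any particular model): distance is just the number of links on the shortest path. In the maximally connected phase every event is one link from every other; strip the graph down to a ring and nontrivial distances, i.e. something like geometry, appear.

```python
from collections import deque

def hop_distance(adjacency, start, goal):
    """Number of links on a shortest path, via breadth-first search."""
    seen, frontier = {start}, deque([(start, 0)])
    while frontier:
        node, d = frontier.popleft()
        if node == goal:
            return d
        for nbr in adjacency[node]:
            if nbr not in seen:
                seen.add(nbr)
                frontier.append((nbr, d + 1))
    return None  # events not connected at all

n = 8
events = range(n)

# 'Ageometric' phase: complete graph, a link between every pair of events.
complete = {v: {u for u in events if u != v} for v in events}

# After the 'cooling': only ring links remain, so distances can grow.
ring = {v: {(v - 1) % n, (v + 1) % n} for v in events}

print(hop_distance(complete, 0, 4))  # 1: everything sits 'on top of' everything
print(hop_distance(ring, 0, 4))      # 4: a notion of distance has emerged
```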

There’s thus no arrangement of quanta as such (as that would imply they are arranged in space); rather, the quantized relations between events give rise to space: the underlying graph is a pregeometric notion. So it’s a bit of a misunderstanding to talk about the ‘size’ of space quanta, as in these models the notion of size is emergent, and not applicable at the fundamental level.

As for particles, it is somewhat unclear how to integrate them into such models – one proposal I find promising is that they are in fact excitations of the underlying graph, similar to how quasiparticles, such as phonons, the quanta of sound, are excitations of an underlying condensed-matter lattice. This typically requires adding some extra structure to the graph. Such particles then typically have a position defined by their interaction with other particles, which in turn depends on how the parts of spacetime they’re in are connected with one another.

In general, position in discrete models is subject to uncertainty: since the coordinates don’t commute with one another, position along one axis is only precisely definable at the expense of defining position along another axis less exactly. So, particles would be at least a little non-local – which fits well with the picture of them being collective excitations of some underlying structure.
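To illustrate the kind of relation meant (the symbol θ below is just an illustrative noncommutativity parameter with units of area, not something from any specific model): if two coordinate operators fail to commute, the standard Robertson uncertainty bound gives

$$[\hat{x}, \hat{y}] = i\theta \;\Longrightarrow\; \Delta x\,\Delta y \;\ge\; \tfrac{1}{2}\bigl|\langle[\hat{x},\hat{y}]\rangle\bigr| = \frac{\theta}{2},$$

in exact analogy with the familiar position–momentum uncertainty relation, so a particle cannot be sharply localized along both axes at once.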

As for the speed of light, going by a condensed matter analogy, it is indeed possible that it is anisotropic – i.e. different in different directions. But since this anisotropy would apply to all physics, we would never notice: all the probes we could use would be subject to the same anisotropy, so the measured speed of light would be the same in all directions for an ‘inside’ observer, even though it might not be for an ‘outside’ one.

And for the importance of the Planck length, consider how our ability to measure smaller distances is related to the energy of the probes we use: the higher the energy, the smaller the distance. At some point, however, you cram so much energy into such a small space that you produce a black hole – which ‘blocks the view’ to all smaller distances. The distance at which this happens is the Planck length – so whether or not it’s the smallest actual distance, it certainly is the smallest theoretically resolvable one.
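That argument can be made quantitative with a standard one-line estimate (textbook reasoning, dropping all factors of order one): resolving a length L requires a probe whose reduced Compton wavelength hbar/(mc) is at most L, but a mass m squeezed into a region of size L collapses into a black hole once L is comparable to its Schwarzschild radius, roughly Gm/c^2. Setting the two equal:

$$\frac{\hbar}{mc} \sim \frac{Gm}{c^{2}} \;\Longrightarrow\; m \sim \sqrt{\frac{\hbar c}{G}}, \qquad \ell \sim \frac{\hbar}{mc} \sim \sqrt{\frac{\hbar G}{c^{3}}} = \ell_{P} \approx 1.6\times 10^{-35}\ \mathrm{m}.$$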