Zeno/Planck to multiverse

MODERATORS - please move as you see fit. Thanks.

DISCLAIMER: I am not a scientist, nor have I ever played one on streaming video.

To the best of my understanding (appallingly limited as it is), Planck basically debunked Zeno’s notion of the infinitely divisible - i.e., the Planck constant, the Planck length, etc.

In short: there is no smaller than the smallest, and it has been quantified Planckily.

So if there is a limit to how small a thing may be - string, bing or thingamajig - then when we consider the possibility of multiple universes, is there an upper limit to the number of them there may be?

Aprons and hurricanes? Probably. I know one is about dividing and the other is about multiplying, one is micro - the other macro, etc. - even so, I think it is an interesting question.

I’m sure better minds than mine have given it the consideration they deemed it deserved - some of them may be denizens here.

Any thoughts?

There’s a lot of misunderstanding about the Planck units, mostly grounded in the fact that most people think we know a lot more than we actually do. To sum up:

1: It must be possible to somehow reconcile general relativity with quantum mechanics. Nobody currently knows how to do it, but there must be some way.

2: Such a reconciliation might involve a quantization of spacetime. It’s a reasonable guess, but we have no idea whether it’s true.

3: If spacetime is quantized, it might be in such a way that there’s some smallest unit of distance, and that all distances are an integer multiple of that distance. Some things that we do know are quantized work that way, like electric charge. On the other hand, there are also other things that we know are quantized, like energy levels in an atom, that don’t work that way. (A quick numerical sketch of this contrast follows the list.)

4: If there is some smallest possible amount of distance, it might be somewhere in the general vicinity of the Planck length. At least, that’s the best guess anyone has. But nobody would be in the least surprised if we someday discovered that the smallest possible distance was half the Planck length, or pi times the Planck length, or something of the sort, and there’s no actual evidence that it couldn’t be something orders of magnitude different.

5: In any event, we know for sure that the Planck quantities aren’t all the smallest possible things. We know that the smallest possible nonzero angular momentum is exactly half the Planck unit of angular momentum (AKA hbar, the reduced Planck constant), but on the other hand, the largest possible speed is the Planck speed (AKA c, the speed of light). Other Planck units aren’t extreme either way: The Planck mass is about 22 micrograms, roughly the mass of a flea egg, and the Planck momentum is about as much as you’ll have in a running housecat.

6: Even if the Planck length is actually the smallest possible length, randomness and probability show up a lot in quantum mechanics. It’s likely to be the case that an object’s position has some probability of being in one place, and some probability of being in some other place, such that the expectation value of its position would be a fraction of a Planck length, even though that’s not an allowed measurement for the position. (For instance, a 50/50 chance of being at 0 or at 1 Planck length gives an expectation value of half a Planck length.)
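
As promised under point 3, here’s the contrast in numbers. Hydrogen’s energy levels go as E_n = -13.6 eV / n[sup]2[/sup], so the gaps between successive levels shrink as n grows and share no common smallest unit, whereas every observed electric charge is an integer multiple of e (or e/3 for quarks). A minimal Python sketch, assuming only the standard 13.6 eV Rydberg energy:

[code]
# Hydrogen energy levels: quantized, but NOT integer multiples of a smallest unit.
RYDBERG_EV = 13.6  # ionization energy of hydrogen, in electron-volts

levels = [-RYDBERG_EV / n**2 for n in range(1, 6)]  # E_n = -13.6 / n^2
gaps = [upper - lower for lower, upper in zip(levels, levels[1:])]

print("levels (eV):", [round(E, 3) for E in levels])
print("gaps   (eV):", [round(g, 3) for g in gaps])
# gaps: 10.2, 1.889, 0.661, 0.306 -- no common "quantum of energy" here.
[/code]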

That’s an awful lot of “mights” there. Really, all there is to the Planck units is that if you take the three most prominent constants of physics (Planck’s constant hbar, Newton’s constant G, and Einstein’s constant c), there’s only one way to combine them to make a unit of length, only one way to combine them to make a unit of mass, and so on. In other words, in Planck units, all three of those constants have a value of 1. Now, that’s quite often convenient for physicists, but just because it’s convenient doesn’t mean there’s any inherent truth behind it.
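
To make that concrete, here’s a minimal Python sketch of the dimensional analysis, using rounded textbook values for the three constants (nothing here is controversial, but the digits are approximate):

[code]
import math

hbar = 1.0545718e-34  # J*s, reduced Planck constant
G    = 6.674e-11      # m^3 kg^-1 s^-2, Newton's constant
c    = 2.99792458e8   # m/s, Einstein's constant (speed of light)

# The unique combinations of hbar, G, and c with the right dimensions:
l_P = math.sqrt(hbar * G / c**3)  # Planck length, ~1.6e-35 m
t_P = math.sqrt(hbar * G / c**5)  # Planck time,   ~5.4e-44 s
m_P = math.sqrt(hbar * c / G)     # Planck mass,   ~2.2e-8 kg (tens of micrograms)
p_P = m_P * c                     # Planck momentum, ~6.5 kg*m/s (a running housecat)

print(f"Planck length:   {l_P:.3e} m")
print(f"Planck time:     {t_P:.3e} s")
print(f"Planck mass:     {m_P:.3e} kg")
print(f"Planck momentum: {p_P:.3e} kg*m/s")
[/code]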

IMHO, aprons and hurricanes. The Planck length, as already said, is purely theoretical, derived from combining constants that characterize the physical world at the quantum scale - ultimately the constant that relates a photon’s energy to its frequency, along with G and c. The idea of multiverses is a class of abstract philosophical notions that is not constrained by physical laws as we know them and indeed can be said to transcend them pretty much by definition, and there are many different and competing theories. Hugh Everett’s “many worlds” hypothesis is one of them, and if correct, it means that every single instance of a quantum wave function collapse spawns multiple universes representing all possible outcomes. If one imagines the same thing continuously going on in each universe, that can be practically regarded as an infinity of universes.

Interestingly, the physicist David Deutsch argues for a physical proof of the Many Worlds interpretation. He argues that quantum computers get their power from qubits interacting with their counterparts in far more parallel universes than there are total atoms in this one, his favorite hypothetical being a quantum computer that uses Shor’s algorithm to factorize a very large number apparently using 10[sup]500[/sup] times more computational resources than actually appear to be present (there are probably fewer than 10[sup]80[/sup] atoms in the entire universe).

Thank you, Chronos. I am going to copy/keep your reply. I don’t post much here, but read frequently; and your posts/replies are at the top of those to which I pay attention and give considerable weight.

But, to my OP - do you have any thoughts, conjectures or wild-ass guesses as to whether there might be an upper limit to the number of universes in the multiverse, if there is such a thing? Or are there just too many ‘ifs’ for the question to even engage your interest?

Wolfpup, you are right up there with Chronos in my Pantheon of Dopers to Pay Attention To ™ - and I appreciate your contribution to this thread.

Are you saying that there is no theoretical limit to the number of universes there may be?

I recall reading somewhere (might have been here), that for other possible realities to exist they would have to adhere to the physical laws we see in operation in this universe - if so, might that not put a cap on it?

Nobody’s even sure there is a multiverse at all. The “many worlds” interpretation of quantum physics is very pretty, and many people support it, but it’s a little like string theory: elegant as anything, but there’s no way (that we know of as yet) to test it.

If an entirely new “world” opens up every time a radioactive atom does – or does not – decay, then there are an awful lot of them!

Thanks, Trinopus.

I am aware that the ‘Many Worlds’/Multiverse idea is by no means universally (insert hearty guffaw) accepted, and the absence of evidence thereof renders my question moot in the extreme…

But I still wonder if, in times of quiet contemplation and martinis, those who dwell in the deeper regions of cosmology may have given the idea some thought, and if so, what those thoughts might be.

Thanks again to all three of you; perhaps you have answered my question but I am not well-informed enough to get it. Will re-read.

One comment on the OP. Zeno was talking about mathematical processes; Planck is referring to physical ones. Two different realms of thought. Planck did not say anything about Zeno or math.

Greek thinkers on the divisibility of matter on the other hand would generally have agreed that a smallest possible physical particle existed. But you can find Greeks disagreeing about almost everything.

There are at least three completely unrelated sorts of scientific musings which could be described as a “multiverse”. The many-worlds interpretation of quantum mechanics is one of them, but that’s a particularly uninteresting one. First of all, it’s even more untestable than the string model: Testing the string model would (so far as we can tell) require ludicrously more resources than we have available, but it’s still a finite problem, and it’s always possible that some clever physicist might come up with a more accessible test. Many-worlds, however, has been mathematically proven to be a valid interpretation of quantum mechanics… which means that it’ll always give you exactly the same results as every other mathematically-proven-valid interpretation of quantum mechanics, of which there are quite a few.

Secondly, even if there is some philosophical sense in which the Many-Worlds interpretation is the “correct” one, it inherently doesn’t work if there’s any way at all for the multiple universes to interact with each other. And if we can’t interact with those other universes, what does it matter whether they “really exist” or not?

True, and I have no issue with any of your arguments. Although this point seems really the same as the first point – if “many-worlds” is untestable, then it follows that we also can’t interact with it; it’s the same point.

But just as a sidebar to this, I’ll bring up David Deutsch again, who is a bit on the fringe but is nevertheless credited with conceptualizing the quantum Turing machine and pioneering quantum computing, which, to my understanding, he did not so much to advance computing as because he saw a functioning quantum computer as a validation of many-worlds.

To date we have no really powerful quantum computers (with some skepticism as to whether systems like D-Wave are truly quantum computers at all) and certainly no practical ones, but what if we did? This is Deutsch’s challenge, from his book The Fabric of Reality, and it’s truly intriguing:
To those who still cling to a single-universe world-view, I issue this challenge: explain how Shor’s algorithm works. I do not merely mean predict that it will work, which is merely a matter of solving a few uncontroversial equations. I mean provide an explanation. When Shor’s algorithm has factorized a number, using 10[sup]500[/sup] or so times the computational resources that can be seen to be present, where was the number factorized? There are only about 10[sup]80[/sup] atoms in the entire visible universe, an utterly minuscule number compared with 10[sup]500[/sup]. So if the visible universe were the extent of physical reality, physical reality would not even remotely contain the resources required to factorize such a large number. Who did factorize it, then? How, and where, was the computation performed?
The trouble is, of course, that this actually has to happen before we can be forced to try to explain it.
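
For readers wondering what the challenge is actually about: Shor’s algorithm uses the quantum computer for exactly one step - finding the period r of f(x) = a[sup]x[/sup] mod N - and the rest is classical number theory. Here’s a toy Python sketch with a brute-force classical stand-in where the quantum period-finding would go (the function names are mine, purely illustrative):

[code]
from math import gcd
from random import randrange

def find_period(a, n):
    """Brute-force stand-in for the quantum step: smallest r with a^r = 1 (mod n).
    This is the part a quantum computer does with the Quantum Fourier Transform."""
    x, r = a % n, 1
    while x != 1:
        x = (x * a) % n
        r += 1
    return r

def shor_factor(n):
    """Classical reduction: turn the period of a^x mod n into a factor of n."""
    while True:
        a = randrange(2, n)
        d = gcd(a, n)
        if d > 1:
            return d                     # lucky guess: a already shares a factor
        r = find_period(a, n)
        if r % 2 == 0:
            f = gcd(pow(a, r // 2) - 1, n)
            if 1 < f < n:
                return f                 # a nontrivial factor of n

print(shor_factor(15))  # prints 3 or 5
[/code]

The quantum speedup lives entirely inside find_period: done classically it takes exponentially many steps for large n, while the QFT gets it in polynomial time.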

Just curious what the other two are. My guess for one of them would be the vast realm of our own universe that is causally isolated from us because of the expansionary phase of the Big Bang. But I’m not coming up with a third.

My guess as to David Deutsch’s challenge about Shor’s algorithm is that he is not taking into account noise. He is asserting that a quantum computer will, in a single step, be able to apply the Quantum Fourier Transform to the input and yield the correct answer. If it were able to do this, and do it reliably, then there would be a real question. But it isn’t going to happen. Noise, in all its forms, is a limiting factor in quantum computing. The chance that the state vector for a 500-qubit calculation will collapse to the right answer in one try is fanciful, and nobody thinks it will. So you run the calculation many times, look at the range of answers, and hope that it either gives you a statistical pointer - i.e., eventually, after lots of runs, you get a majority of right answers - or gives you answers that have lots of the bits correct, which you can use to reduce the search space (an option I rather doubt will happen). The problem with sampling many times is that information theory gets in the way: the number of times you need to sample goes up exponentially with the number of bits of resolution you are trying to resolve below the noise.
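
To put rough numbers on that last point (a toy framing, not a precise model of any particular machine): the standard error of an average falls only as 1/sqrt(N), so each extra bit of resolution you want below the noise floor quadruples the number of runs.

[code]
# Resolving a signal b bits (a factor of 2**b) below the noise floor
# requires the standard error to shrink by 2**b, and since it only falls
# as 1/sqrt(N), the number of runs N grows as 4**b -- exponentially.
for b in range(1, 11):
    print(f"{b:2d} bits below the noise -> ~{4**b:>10,} runs")
[/code]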

But the presence of noise causes all sorts of problems, decoherence of the entangled states being the most obvious and pressing, but also noise that dominates the qubit’s collapse. You may have ten zillion (2[sup]500[/sup]) superposed states, which means that only the tiniest smidgen of noise is needed to give you serious problems.

My prediction is that no one will be able to create a quantum computer that manages to conduct a calculation that allows David Deutsch to claim it is proof positive of many-worlds. Viable quantum computing will skate along the bounds dictated by noise, and the many-worlds interpretation will remain as it is now - a valid but unverifiable idea.

A note on the OP’s other component: Zeno. It should never be called Zeno’s Paradox. It isn’t a paradox. It is a fallacy. The argument is flawed and trivially refuted. There is never any point in bringing Zeno to the table in an argument, as it is simply wrong.

Close-- It’s the idea of eternal inflation. The idea is that the inflationary state of the Universe is unstable, and that it’s natural for it to “tip over” and fall into a stable (or at least much closer to stable) state like our Universe… but that such a stable configuration can only grow at the speed of light, meaning that there’s always more inflationary space than non-inflationary space, because the inflationary space is growing faster than any universe can expand into it. So every time the balance gets upset in any one spot, that spot grows into a new universe. These universes would then really just be different locations in space, such that one could in principle point in a direction towards them, but would be so far away that, again, it’d be impossible to interact with them.

The third sort of multiverse is the brane theory one. Basically, this would be different spaces separated from ours along some higher dimension. Depending on the model, there could be one, two, or many of them, possibly an infinite number. These, you couldn’t point towards, but the exciting thing about them is that you might actually be able to interact with them. Under some models, gravitational effects can travel between the branes, and even absent that, there’s nothing that says that there fundamentally can’t be something else (what, we don’t know) that could interact.

As to Deutsch’s argument, he’s managed to describe quantum computers in terms of the Many Worlds interpretation. Sure, that’s no surprise: Quantum computers are a real quantum mechanical phenomenon, and so they can be described in terms of a valid interpretation of quantum mechanics. But they can also be described in other ways, in terms of other interpretations. Are those descriptions as simple as the MWI one? Well, maybe not, but again, it’s no secret that some interpretations of QM make it easier to set up some problems: For any given problem, you use whatever interpretation you find easiest.

Thanks, Exapno and Francis V for setting me straight re Zeno.

Chronos, what is the current consensus about branes among those with enough knowledge to have a meaningful opinion?

Thanks again to all of you for indulging me in this conversation.

Like most topics in fundamental theoretical physics, the consensus is “maybe”.

Your prediction is very plausible and also quite fascinating in the way it implies that noise might be nature’s way of hiding its most profound secrets. And yet, I’m not sure that Deutsch is really all that unaware of the noise problem. In fact he sees it somewhat in reverse, that you need to protect the environment from the quantum states in the computer and not the other way around (though it amounts to the same thing) because – according to Many Worlds – whenever a quantum state decoheres we split into multiple universes!

In any case, Shor himself and many others believe that the noise problem is solvable. As he said in an MIT lecture, “The same objection was raised to scaling up classical computers in the 1950s. Von Neumann showed that you could build reliable classical computers out of unreliable classical components. Currently, we don’t use many of these techniques because we have extremely reliable chips, so we don’t need them.”

Now, granted, quantum computing errors are far more frequent due to accumulated decoherence and other factors, but many believe that they can be addressed through scalable quantum architectures with quantum error correction and fault tolerance. The same principles that Von Neumann elucidated could in theory be used to build arbitrarily reliable quantum gates, provided only that some minimal threshold of reliability can be achieved; Shor cites evidence that quantum gates with noise of 1 part in 30 can be made fault-tolerant using these techniques, though with tremendous overhead - and actual gates may be 5 or 10 orders of magnitude better than that.
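
To put toy numbers on that threshold argument (assuming the simplest textbook model, where each level of concatenated error correction squares the error rate relative to the threshold; the 1-in-30 threshold is the figure cited above, and the physical error rate is my illustrative assumption):

[code]
# Toy threshold-theorem arithmetic: below threshold, each level of
# concatenated quantum error correction squares the rescaled error rate.
p_th = 1 / 30   # assumed fault-tolerance threshold (the figure cited above)
p = 1e-3        # assumed physical gate error rate (illustrative only)
for level in range(5):
    print(f"level {level}: logical error rate ~ {p:.2e}")
    p = p * p / p_th  # p -> p^2 / p_th at each concatenation level
[/code]

Below threshold, the logical error rate plunges doubly exponentially with each level, which is why “arbitrarily reliable” is not hyperbole; the price is the tremendous overhead mentioned above.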

To be clear, I’m not disagreeing with you, but rather clarifying that, to the best of my understanding, there are no clear answers on how reliable quantum gates can be made or how efficient quantum error correction can become.

Actually, Max Planck got into a big fight about it with Zeno of Elea. Planck won, but not by much.