Is the nature of space-time an argument for determinism?

That seems like a pretty presumptuous assumption. We believe the past is the one and only sequence of events that could have precipitated the present, but is that in fact the case, and can we actually know that? Given the uncertainty of QM, why should we assume that the past is any less probabilistic and uncertain than the future? After all, the further back we go, the less precise our understanding of past events becomes.

If there might be multiple possible event structures that lead to this exact moment, then there might be multiple event structures that lead beyond it. Perhaps cause/effect is clouded by perception and not truly determinable in either direction. Or there may be more than one dimension to time.

Stephen Hawking calls this concept “Euclidean space-time,” and if it were true, it would certainly be proof of absolute determinism. The problem is that it isn’t a “fact” so much as a highly speculative theory.

The multiverse idea – and specifically Hugh Everett’s “many-worlds” hypothesis – is something quite different and equally speculative. It posits that quantum superposition resolves itself into all possible states, each occurring in its own universe, so every possibility actually occurs, but the beings in each universe observe just one of them – which is what looks to them like wave function collapse.

This thread in GQ, which is nominally about the universe being a simulation, evolved into a very good discussion about quantum determinism, thanks to the patience and expertise of Half Man Half Wit in addressing my numerous questions.

Well, I somehow thought that this was a generally accepted theory, and it aligns with my (not very thorough) understanding of relativity. But if both theories are highly speculative, is there another, more firm theory?

And I’ll check out the thread you linked to, but it’s late here, so that’s for tomorrow.

Semantic misunderstandings are the SDMB’s bread and butter! :slight_smile:

Isaac Newton’s mechanics inspired – and Laplace later made famous – the version of determinism that holds: if you could identify the position and velocity of every particle, you would then be able to know the entirety of the past, and predict, to perfect detail, the entirety of the future.

His idea would essentially put us all frozen in amber, unable to make changes. (But, then, he also believed in the soul, so probably would argue that we can “miraculously” make choices and have free will of a theological kind.)

But, yes, definitely, using the word “determine” the way you meant it, then the word “not” was correctly employed. Under this “frozen in amber” vision, there simply isn’t anything we can do to avoid the pre-destined future. But, paradoxically, we might be able to view it, or discover it, or deduce it!

(In roughly the same way that, say, solar eclipses are “predestined.” We can predict them far in advance…and there is nothing we can do to change them.)

I’ve heard this idea before, and I have to confess, I don’t grok it. I don’t quite see how two past events could lead to the “same” present. It strikes me as akin to “parallel evolution” in biology. Yes, we often find similar evolved structures, but they aren’t the “same.”

But…yeah, I do see the validity of applying quantum uncertainty backwards in time. I can’t know exactly where that electron was, any more than I can say exactly where it will be.

(Heck, we can’t even say where it is!)

I just don’t comprehend the mechanism of convergence. It’s too much like saying that a bloke has two fathers.

Thanks for the clarification. I’m not very experienced in philosophical conversations, but I know that definitions are crucial, so I’m glad that I finally got my point across.

Partly my fault, partly your fault, but mostly the fault of this magnificent language we’re using! English! What a language! Where flammable and inflammable mean the same thing! :smiley:

Philosophical discussions are best (IMHO) when dealt with as good old-fashioned sophomore year bull sessions. “The world will little note, nor long remember what we say here.” So…we have a little fun, try to learn a little, pop up a good quip now and then, and take a good dig at ignorance.

Well, I couldn’t have expressed it that well, but that’s what I had in mind when posting the OP (and of course to fight my ignorance, you know, because of that motto…). And I share your admiration for the English language.

That’s what I used to think, too – in fact I said as much in the thread that I linked in post #22 which I’ll say again is an excellent discussion of the subject (most of it on the second page). But I was persuaded by that discussion that there is a subtle yet profoundly significant error in that thinking. It’s the fact that quantum evolution may be completely deterministic (and there is apparently good evidence that it is) without violating either the uncertainty principle or the apparent randomness of wave function collapse.

I think a useful analogy is imagining a simulation of the universe running on a supercomputer. Many of the major macroscopic events occurring in that universe like the motions of bodies in space would be classically deterministic and simulated as such. Others, like quantum wave function collapse, would appear to be random and simulated by an entirely algorithmic pseudo-random number generator. To the simulated beings in this universe, these quantum events are entirely random, and while certain measurements can be taken of position or momentum, they can never have enough information to predict quantum evolution or retroactively project it backwards. But if we took a snapshot of the computer’s memory at some point in time and played it forward again, it would always produce exactly the same results, including the output of the number generator.
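To make that concrete, here’s a toy sketch of the idea – the `run_universe` function, the seed value, and the coin-flip “quantum event” are all just inventions for illustration, not any real physics:

```python
import random

def run_universe(snapshot_seed, steps=5):
    """Toy 'universe' in which each step resolves a quantum event via a
    pseudo-random number generator. From the inside the outcomes look
    random; from the outside they are fully fixed by the snapshot of the
    generator's state."""
    rng = random.Random(snapshot_seed)   # the 'snapshot' of the computer's memory
    history = []
    for _ in range(steps):
        # e.g. does the photon pass the half-silvered mirror or get reflected?
        history.append("passed" if rng.random() < 0.5 else "reflected")
    return history

# Replaying from the same snapshot always yields the identical history,
# even though no observer inside the simulation could predict it.
print(run_universe(snapshot_seed=42))
print(run_universe(snapshot_seed=42))  # exactly the same output
```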

IOW, the concept of a completely predetermined future even at the quantum level is not inconsistent with the fact that such a future can never be fully knowable. Both things may be true, and indeed it’s likely that they are.

And the whole question of “free will” is just a tedious red herring. Free will is simply a subjective manifestation of sentience. It really has nothing to do with physical determinism. All the simulated beings in the above simulation example would go around making decisions and celebrating their “free will” even as the computer (the computer here being a proxy for the laws of nature) inexorably governs every single thing they do down to the firing of every last neuron in their brains.

This is the other variety of “hidden variables” idea. For example, that Uranium atoms don’t decay randomly, but, instead, there is some kind of mechanism, some sort of “fuse” that burns down slowly, telling the atom when to decay.

For that specific example, the decay of atoms, I don’t buy it, because it would require an inner mechanism of staggering complexity. It would have to allow for U atoms to “know” when to pop, with incredibly detailed precision, over times as long as many years, down to seconds. And every time a U atom is formed, the timer has to be set to some initial value. Yet it all appears perfectly random.

Maybe…just maybe…for the photon passing through the half-silvered mirror, an argument for some kind of determinism can be made – although the experiments certainly have every appearance of randomness. But the U atom? No way! Each atom would have to have the equivalent of a microprocessor, with built in clock, counting clock cycles until the moment of decay. Occam could shave every man on earth (and every woman too) without dulling his razor!

(I don’t recall the other thread, and, in any case, I know I’m out of my depth. But depth be hanged: there’s my two cents!)

That’s basically my position, though I didn’t quite understand your explanation for it. Maybe I have to read the other thread to get it.

Right, the question of free will has nothing to do with my OP. If the universe were indeterministic, the source of uncertain outcomes would be quantum effects, not sentience, I suppose. Though the assumption of physical determinism destroys the concept of free will, there are other arguments against it that work just as well.

But clearly such a mechanism does exist, because the atom does decay at some particular time. Declaring it to be “random” doesn’t refute that in the slightest. In the final analysis, calling an event “random” is really just saying that we don’t have enough information or sufficient understanding of its underlying causation to be able to predict it. This is how we understand it in the classical context of, say, a roulette wheel. It surely does not mean that the roulette wheel isn’t governed by the laws of physics! It’s not much different from attributing an event to “magic” or “God,” and it’s equally ineffective as a refutation of the existence of causative processes.

I disagree: calling an event random means (in this context) specifically that there is no underlying mechanism. You can have two identical Uranium atoms, and one decays in five seconds, while the other decays in four years. You can repeat the experiment a thousand times. Identical atoms…decaying at different times.

To claim that they are not, in fact, identical, is now your challenge. Where is the clockwork gearbox, the burning fuse, the clock chip that counts down the time? Such a contraption would have dozens of moving parts, at very least, and probably thousands. The quarks would have to have sub-quarks, and sub-sub-quarks, and all of these would have to interact in such a way as to keep track of time, against a previously set “expiration date.”

There is simply no evidence for this whatever.

There is no “magic” or “god” in this explanation; it is nothing more than the best model for the observed facts.

If (hypothetically) U atoms that turn out to decay later happened to be slightly more massive than U atoms that turn out to decay sooner, then that would be powerful evidence for some sort of timing mechanism. But there has never been any such evidence.

My argument is that if there were no underlying mechanism, then the atom wouldn’t decay. The claim of randomness isn’t a “best model” or indeed a model at all, it’s a literary device to say that we don’t know what the model is.

A number of responses come to mind to answer that. I don’t claim expertise in this area but I believe the following are correct.

  1. The idea of some timer or “burning fuse” is based on an incorrect assumption. Radioactive decay isn’t time-based in that sense: the probability that a given atom will decay in the next second is exactly the same regardless of how old the atom is (see the sketch after this list). There is no complex clockwork required because that’s not how it works.

  2. You can’t have some atoms of the same isotope heavier than others, but you can observe the decay of different isotopes. The existence of a systemic decay mechanism is supported by the fact that different radioactive isotopes decay at strikingly different rates. While any individual decay can only be expressed as a probability, the half-lives range from inconceivably tiny fractions of a second to billions of years, and even vary a great deal among different isotopes of the same element.

  3. The existence of a systemic decay mechanism is additionally supported by the fact that, at least in special cases such as electron-capture decay, external factors like the chemical or electronic environment can measurably change the decay rate.

  4. Fourth and probably the most important is that nuclear decay is just an example of the collapse of a probability wave function (the atom’s quantum superposition). If one accepts that quantum evolution is deterministic as appears to be the prevailing view, then so is the underlying mechanism of nuclear decay. But as already said, “knowable” and “deterministic” are not the same thing. Something can be unknowable yet still be deterministic.
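Here’s a toy Monte Carlo of point 1, just to show what “no burning fuse” means in practice – the per-step decay probability is an invented number, not a real nuclear constant:

```python
import random

def decay_time(p_per_step, rng):
    """Steps survived by an atom that has the same probability p_per_step
    of decaying in every step, no matter how old it already is."""
    t = 0
    while rng.random() >= p_per_step:
        t += 1
    return t

rng = random.Random(0)
p = 0.001                  # illustrative per-step decay probability
times = [decay_time(p, rng) for _ in range(100_000)]

# Memorylessness: among atoms that have already survived 1000 steps, the
# fraction decaying within the next 1000 is the same as for brand-new atoms.
fresh = sum(t < 1000 for t in times) / len(times)
survivors = [t - 1000 for t in times if t >= 1000]
aged = sum(t < 1000 for t in survivors) / len(survivors)
print(fresh, aged)         # both close to 1 - (1 - p)**1000, about 0.63
```

No timer gets set and nothing counts down, yet the spread of decay times comes out enormous – which is exactly the pattern we observe.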

I disagree with this just on the face of it. How can you claim that two atoms – nuclei with 235 nucleons comprising 705 quarks – are truly identical? They are surely very similar, but I seriously doubt they could be called identical. (It may also be worth noting that those constituent quarks amount to only about 1% of the mass/energy of each nucleon; the rest lies in the binding energy of the strong force holding the quarks together inside each nucleon.)

A [sup]1[/sup]H atom is a very simple atomic nucleus. A [sup]235[/sup]U nucleus is very obviously not. There is a lot of stuff in there, and it is moving around all the time. As far as we know, there are constant pion interactions between the protons and neutrons, or perhaps even some sort of W or Z exchanges causing protons and neutrons to mutually transmogrify around each other. And who knows what else. It is kind of difficult to examine what actually goes on inside large atomic nuclei.

It seems to me that the apparent randomness of decay, in concert with the basic statistical certainty of it, looks like some sort of evidence. An atomic nucleus is not just a lump of stuff, and the fact that a roughly balanced proton-to-neutron count (at least among the lighter elements) is typical of a stable isotope looks to me like even more evidence: that there is some sort of complex interaction going on between those two constituents.

Or we could descend into the depths of rabid speculation and speculate that the mechanism that regulates decay is not internal to the atom but is governed by this tremendous flux of unseen dark matter/energy that we are constantly being bathed in. Which would mean that these things which appear to be random are of truly unknowable causation, at least until we can discover a way to observe this allegedly pervasive stuff.

I really don’t understand where your objections are coming from; there are deterministic models of radioactive decay that are just as general as quantum mechanics, and they don’t involve “sub-quarks.”

In fact at a basic level hidden variable theories seem very sensible: we often see random behaviour arising in deterministic systems when we have incomplete knowledge of the deterministic state. It’s only when you more thoroughly investigate what hidden variables entail that they become less attractive; even so they are far from ruled out.
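As a toy illustration of that point (emphatically not a model of any actual hidden-variable theory): a completely deterministic map, observed coarsely, produces output that looks like fair coin flips to anyone who can’t see the exact underlying state.

```python
def chaotic_bits(x0, n):
    """Fully deterministic logistic map x -> 4x(1-x). An observer who only
    sees the coarse-grained bit (is x above 0.5?) and never the exact
    'hidden' value of x sees what looks like an unpredictable coin."""
    x, bits = x0, []
    for _ in range(n):
        x = 4.0 * x * (1.0 - x)
        bits.append(1 if x > 0.5 else 0)
    return bits

print(chaotic_bits(0.123456789, 20))   # looks random...
print(chaotic_bits(0.123456789, 20))   # ...but the same hidden x0 gives the same bits
```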

I don’t see how that follows. Different nuclei entirely. Apples and oranges. It doesn’t support either determinism or randomness. It’s either a different determinism, or a randomness with a different mean and SD.

Disagree: that merely suggests that the random parameters aren’t absolutely fixed. The randomness, for instance, might be “thermal” (in some way) so that, in a very hot environment, decay rates are increased.

News to me (I’ll repeat this below) that randomness has been removed from the Schrödinger wave equation. This is what I was taught (ever so long ago) when calculating such things as the probability of tunneling. It’s purely random. The wave function (okay, the squared magnitude of the wave function) gives the probability of finding the particle at a given location. It’s a random probability.

If there’s new stuff showing this isn’t so, I haven’t seen it yet.

One of the possible models I was taught, way back in the day, was that, due to quantum uncertainty, the nucleons in a U nucleus are always “moving.” They aren’t fixed, but always have random locations. Decay might happen – and this was only speculative when it was taught to me – when, by pure random chance, a certain number of the nucleons are beyond a specific radius.

(Imagine a circle with a whole bunch of cockroaches inside it. They’re constantly skittering around randomly. At some point, by pure chance, more than half of them have wandered outside that circle. It could happen soon…or it could take a long time to happen. The bigger the circle, the more “stable” the cockroach nucleus.)

(Sometimes, physics is just disgusting… :wink: )

What difference has been perceived? It’s the standard view that any two Uranium atoms (of the same isotope) are identical. They weigh the same, have the same number of particles, etc. (Okay, leave out ionization…)

There is no physical difference yet observed between the U atom that decays three seconds from now and the one that decays three years from now.

If the difference in decay times is said to be based on some physical difference between the atoms…somebody needs to demonstrate this.

A really cool notion, to be sure! I’d love to see it addressed formally and experimentally. It’s one of those revolutionary ideas I would want to be true!

I’ve certainly never heard any deterministic theory of nuclear decay. If this is a thing, it’s one I simply haven’t heard of yet. I’m certainly not gonna say, “There’s no such thing.” Just…news to me.

And…as I’ve been saying…if Uranium decay is deterministic, then what is the mechanism? It would have to be remarkably complex, to account for the wide variation in observed decay times. There would have to be some “inheritance” mechanism, where atoms are given their initial “deadline” for decaying, some other mechanism for remembering that time, and a third mechanism for counting off the time to know when h-hour has arrived. It’s grotesquely un-Occamic!

To clear this up: the Schrödinger equation describes the deterministic evolution of the quantum state (vector), and in standard QM this is then related to the outcomes of measurements by a probabilistic rule (the Born rule).
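Spelled out, the split in the vanilla formalism looks roughly like this (nothing here is specific to Bohmian mechanics):

```latex
% Deterministic part: unitary evolution of the state vector
i\hbar \frac{\partial}{\partial t} \lvert \psi(t) \rangle = \hat{H} \, \lvert \psi(t) \rangle

% Probabilistic part (the Born rule): the probability of obtaining outcome a_i
% when measuring an observable whose eigenstates are |a_i>
P(a_i) = \bigl| \langle a_i \mid \psi \rangle \bigr|^{2}
```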

Bohmian mechanics is deterministic and can do everything vanilla QM can do, including providing a basic theory of alpha decay. Other types of decay require extensions to QM/BM.

I don’t understand why you think it should have these features, or why they would be particularly complicated (for the reasons you give). What is the mechanism for the conservation of momentum that allows a particle to remember what momentum it has? What is the mechanism by which any system acquires the time it takes to move from its initial state into some later state, and how does it know to move into the later state at the appropriate time?

In QM alpha decay is explained by the wavefunction of the alpha particle spreading out past the potential barrier of the nucleus, giving the alpha particle a non-zero chance of being found outside of the nucleus when a subsequent measurement is made. In Bohmian mechanics the explanation is that the quantum potential will occasionally ‘lower’ the potential barrier allowing the alpha particle to escape if it has the right trajectory.
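To give a feel for why barrier penetration alone produces such wildly different escape rates, here’s a very rough WKB-style estimate for a square barrier – the physical constants are real, but the barrier height, width, and particle energy are purely illustrative, not fitted to any actual nucleus:

```python
import math

hbar = 1.054571817e-34     # J*s
m_alpha = 6.64e-27         # kg, roughly the alpha particle mass
MeV = 1.602176634e-13      # joules per MeV

def transmission(E_MeV, V0_MeV, width_m):
    """WKB-style tunneling probability through a square barrier:
    T ~ exp(-2 * sqrt(2m(V0 - E)) / hbar * width)."""
    kappa = math.sqrt(2.0 * m_alpha * (V0_MeV - E_MeV) * MeV) / hbar
    return math.exp(-2.0 * kappa * width_m)

# A modest change in barrier height shifts the escape probability by many
# orders of magnitude, which is (loosely) why half-lives span such a range.
print(transmission(E_MeV=4.0, V0_MeV=20.0, width_m=2e-14))
print(transmission(E_MeV=4.0, V0_MeV=25.0, width_m=2e-14))
```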

Relativity—notably, the lack of absolute simultaneity—has indeed often been used to argue for a fully deterministic world (most often called the ‘block universe’ in this context). The most famous of these arguments is probably the one due to Rietdijk and Putnam, who posit that two people, passing each other in the street (and thus, having a different relative speed) have different planes of simultaneity—i.e. different present moments—one of which will be to the future of the other.

That is, if you walk past me, at the moment at which we’re in roughly the same place, your present moment might include the deliberation of aliens in the Andromeda galaxy over whether or not to attack the Earth, while my present moment includes the attack fleet already on its way; thus, it would seem that the outcome of the Andromedans’ debate must already be determined, since after all, in my present, the fleet is already on its way.
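For scale, the size of that simultaneity shift follows from the standard relativity-of-simultaneity formula dt = v*d/c^2; the walking speed and the ~2.5-million-light-year distance to Andromeda below are just round illustrative figures:

```python
# Rietdijk-Putnam / "Andromeda" back-of-the-envelope: two observers passing
# each other at relative speed v disagree about what is "now" at distance d
# by roughly dt = v * d / c**2.
c = 299_792_458.0            # m/s
ly = 9.4607e15               # metres per light year
d = 2.5e6 * ly               # rough distance to the Andromeda galaxy
v = 1.4                      # m/s, a brisk walking speed

dt = v * d / c**2
print(dt / 86_400, "days")   # on the order of a few days
```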

Of course, the conclusion can be challenged in various ways—first of all, it might be the case that relativity, after all, is not the correct theory of spacetime—a future theory of quantum gravity, for instance, could in various ways negate the above conclusion. Second, it might be that as a theory, relativity is perfectly adequate, but it’s wrong to interpret it as a theory of spacetime—there is an alternative interpretation in which relativity is, in a sense, just an ‘emergent’ property of the universe, known as the Lorentzian interpretation.

This is somewhat bolstered by the existence of analogue gravity—systems that are from the outside governed by Newtonian mechanics (or quantum mechanics), but which, for an ‘inside observer’, assume various characteristics of relativistic systems (systems with a finite speed of sound, where an internal observer is limited to communicating and observing using particles moving at this speed, can show various special or general relativistic effects). An inside observer in such a system might put forward a Rietdijk-Putnam-like argument for determinism, but actually live in a universe with a single, absolute Newtonian time.

In all of those other cases, the information is stored in the system.

But when you have two identical Uranium atoms…where is the information stored?

How does one U atom “know” it is destined to decay in two seconds, and its identical neighbor “know” it is destined to decay four years from now?

I also wonder – I think we’ve bumped up against this before – is “determinism” used differently in QM than it is in the classical sense?

Classical determinism, the Newtonian variety, is that the whole of the future is fixed, pre-destined, and unchangeable. The particles bump into each other like billiard balls, at exactly known angles and velocities.

Applying this to QM would mean that a particular Uranium atom has a given fixed “expiration date” when it will decay. It means that the photon going through the half-silvered mirror has some internal reason for passing or for being reflected. If it isn’t random, there has to be a mechanism. One photon would have to be different in some way from the next, with some internal property that says, “I’m gonna pass right through” or “I’m gonna be reflected.”

Unless we find eschereal’s idea to be true – the information is stored in the environment somewhere – a lovely idea! – then it seems that one of the two others has to be the case. Either the photon passes or reflects at random…or else one photon is not actually the same as another. Either the Uranium atom decays at random…or else there is a difference between one U atom and the next.

Why is the same answer not applicable? That it’s the properties of the individual atoms. “Identical” in structure doesn’t mean identical in state – even if those states are not knowable.