Continuous versus discrete time

Recently, I have been working with state machines and temporal logic for a class I’m taking. All of the models I have been working with treat time in a discrete sense.
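(For concreteness, here is the sort of model I mean, a toy sketch with names of my own choosing: time is just a sequence of ticks, and the entire state jumps from one value to the next at each tick.)

```python
# A minimal discrete-time state machine (illustrative only).
# Time is a sequence of ticks; the state is defined only at tick
# boundaries, and nothing exists "between" t=0 and t=1 in the model.

TRANSITIONS = {"red": "green", "green": "yellow", "yellow": "red"}

state = "red"
for tick in range(6):              # six discrete instants
    print(f"t={tick}: {state}")
    state = TRANSITIONS[state]     # the whole state jumps at once
```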

Everyone knows that time is continuous: it flows without interruption. But I propose that there must be some period of time, which I will call an “interval of continuity”, small enough such that nothing in the universe changes from one of these periods of time to the next. We may not be able to measure so small an amount of time (indeed, doing so would probably require a redefinition of the word ‘time’), but it would give the appearance of a continuous reality when reality was in fact discrete.

Doesn’t string theory have something to say about this?

After some more searching, it appears as if my hypothesis has already been refuted. Or does the simple existence of the tortoise paradox mean that there can be no interval of continuity?

I can’t be of much help, but try General Questions - they’ve answered some pretty good physics questions and can probably give you a decent idea of what the issues are.

Ah, you want to know if time is quantized…

From what I’ve been reading, it appears that a unified theory of physics (a “theory of everything” that includes gravity) would require some sort of quantization of spacetime. And since the consensus is that such a theory does exist (even if it has not been discovered), it would seem that most quantum gravity physicists believe that time is quantized.

I thought that all of these paradoxes were cleared up by the invention of the calculus and the follow-on work to make it more rigorous.

If “the tortoise paradox” refers to one of Zeno’s paradoxes, as it probably does, then, although the standard line is that the calculus, by showing how to sum infinite series into finite values, resolved them, I would hold that, even before the calculus came along, they could easily be seen to be groundless. Why should it be problematic that, in the finite span between times A and B, infinitely many other times should lie, any more than one would find it problematic that, in the finite span between the numbers 0 and 1, infinitely many other numbers lie [something people grasped well before the calculus]? One doesn’t need to know how to sum infinite series in order to understand that there is really no contradiction here, though that would certainly be a naturally related area of investigation.
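For concreteness, the standard series here is the geometric one from the halving version of Zeno’s argument: infinitely many sub-distances, finite total.

```latex
\frac{1}{2} + \frac{1}{4} + \frac{1}{8} + \cdots
= \sum_{n=1}^{\infty} \frac{1}{2^{n}} = 1
```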

As to the OP: “But I propose that there must be some period of time, which I will call an “interval of continuity”, small enough such that nothing in the universe changes from one of these periods of time to the next.” Why? Why must there be such a period of time? Certainly, we could consistently describe/model a universe without such. Even if it should turn out that such “intervals of continuity” actually do exist in our universe, that would be a particular contingent quirk of our physical laws, and not something conceptually intrinsic to a coherent notion of time.

In mathematical terms, you are talking about infinitesimals.

Only if you tie that to the assertion that nothing changes over an infinitesimal unit of time.

(The orthodox mathematician could agree with that, insofar as he thinks 0 is the only infinitesimal. You might say “Well, let’s talk about less trivial notions of infinitesimal”. Fair enough. But if you stick to “Nothing changes over an infinitesimal unit of time”, you’re trivializing things anyway… Why not, in that case, just take all infinitesimals equal (and thus equal to 0)?)

One could propose interesting mathematical systems along those lines if one weakened “nothing changes” suitably. For example, in the smooth infinitesimal analysis of Lawvere et al., there are some nilsquare infinitesimals small enough to square to zero without necessarily being zero themselves; furthermore, over a nilsquare infinitesimal period of time, all change happens linearly (thus, no acceleration); one can’t even define a mathematical function, in these systems, which behaves differently.
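The axiom doing the work here is the Kock–Lawvere axiom, which, loosely stated, says that on the nilsquare infinitesimals every function is exactly affine: for each f there is a unique slope b with

```latex
f(d) = f(0) + b \cdot d \quad \text{for all } d \in D = \{\, d : d^{2} = 0 \,\}
```

That unique b is then taken as the definition of f′(0), which is why “all change over a nilsquare interval is linear” comes out as a theorem rather than an approximation.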

This can be generalized, to give systems of analysis in which one has a hierarchy of infinitesimals, starting with {0} (all movement over this much time being constant), then widening to those small enough to square to zero (with all movement over this much time being linear), then widening to the slightly more inclusive set of those small enough to cube to zero (with all movement over this much time being quadratic [acceleration, but no jerk]), etc. However, these systems are pretty closely tied to, and best described using, intuitionistic logic, which for many is a radical and often non-intuitive departure from classical mathematical logic (in this framework, properties like “x either is or is not equal to 0” will often fail to hold; indeed, the only infinitesimal for which it will hold is 0).
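Roughly stated, the generalized axiom says that on the k-th stage of the hierarchy every function is exactly a polynomial of degree at most k, so motion over such an interval carries exactly k derivatives’ worth of information and no more:

```latex
D_k = \{\, d : d^{k+1} = 0 \,\}, \qquad
f(x + d) = \sum_{i=0}^{k} \frac{f^{(i)}(x)}{i!}\, d^{i}
\quad \text{for all } d \in D_k
```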

Interesting though all of that is, these aren’t exactly what the OP was talking about, and I still can’t see why one would feel there had to be “intervals of continuity” (either in his sense or in the sense of this post); it’s perfectly consistent to imagine a universe with no such things. But perhaps the OP can persuade me otherwise, or at least persuade me that one is best off speaking as though such intervals existed, regardless of the ontological question about units of time.

But why should the underlying topological assumptions of Calculus (shameless self-promotion: a complete, ordered topological field) actually apply to the real world?

That’s a five-year-old paper. Why don’t we think that the problem has been solved and put aside?

Because there’s a huge breakthrough/explanation of time about once a week in the physics and philosophy communities. You can explain time in a thousand different ways, including the old favorite “it doesn’t exist, it’s just an illusion.”

When and if somebody properly explains time, we’ll all hear about it and you won’t have to drag up ancient articles from the net. Until then - assuming there’s such a thing as until, or then - nothing is settled and explanations tend to fall into the categories of interesting but unproven and nutball.

Does the Planck time have any relevance here?

If I understand the logic of the link from the OP, Lynds is claiming that, even in an instant of time, moving objects are “blurred”, so to speak, so that you can’t assign them a classical fixed position. So even in an “interval of continuity”, the object isn’t changing its position so much as there is an irreducible uncertainty in its position. So the link doesn’t necessarily refute the idea that time is quantized.

Well, despite there being no experimental evidence of a space/time quantization (though Q.E.D.'s link provides an interesting example of seemingly quantized galactic redshifts), there are some theoretical considerations in favour of it: if you want to observe anything (say, a particle), you can, for instance, do so by bouncing some photons off of it. Now, the higher the precision you want, the smaller the wavelength of those photons needs to be. But, for a photon, smaller wavelength means higher energy, until you arrive at the point where the collision event (i.e. the act of observation) creates an energy density sufficient to create a black hole, thus setting an effective limit on the precision of measurement, somewhere around the Planck length. Similarly, via the energy/time uncertainty relation, one gets a minimum measurable duration, around the Planck time.
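To put rough numbers on that hand-wave (a back-of-the-envelope sketch, not a derivation from any established theory): the crossover happens where the probe’s quantum wavelength is comparable to its own Schwarzschild radius, which picks out the Planck scale:

```latex
\frac{\hbar}{mc} \sim \frac{Gm}{c^{2}}
\;\Longrightarrow\;
\ell_P = \sqrt{\frac{\hbar G}{c^{3}}} \approx 1.6 \times 10^{-35}\ \text{m},
\qquad
t_P = \frac{\ell_P}{c} = \sqrt{\frac{\hbar G}{c^{5}}} \approx 5.4 \times 10^{-44}\ \text{s}
```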

Just because we can’t measure it, though, doesn’t mean it’s not there, obviously. But there are other considerations that come into play here: since all inertial observers should be able to describe phenomena via the same laws, we run into trouble because of special relativity’s Lorentz contraction, which, in effect, lets observer A describe phenomena indescribable to observer B just because they’re moving relative to each other.
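The tension can be made concrete with the ordinary length-contraction formula: a rod measuring exactly one Planck length in its rest frame measures less than a Planck length to any relatively moving observer, so a supposedly minimal length is not minimal for everyone.

```latex
L' = L_0 \sqrt{1 - \frac{v^{2}}{c^{2}}} = \frac{L_0}{\gamma}
```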

A workaround for that is proposed in the form of DSR, doubly- or deformed special relativity, which postulates the Planck length as an invariant minimum scale, in addition to c being an invariant maximum velocity in ordinary special relativity.

Well, that wasn’t exactly a miracle of clarity, but my time is short right now, so, to sum up: there’s no experimental evidence for, nor any widely accepted theory of, quantized space/time, but there are considerations that make the possibility worth taking seriously.

From **Colophon**’s link:

Can someone help me here? Why should non-blurry images challenge the theory that Planck time is the smallest measurable unit of time?

As a semi-educated guess, I’d say that a discrete space-time ought to cause minor deviations in the path of a beam of light, which, over cosmic distances, might add up to something significant, blurring the images. The absence of that blur means this doesn’t happen, though I’m not sure one can immediately conclude the non-existence of a smallest time scale from that.
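For what it’s worth, the usual way this gets parametrized in the quantum-gravity phenomenology literature (a model-independent guess, not a prediction of any established theory) is an energy-dependent photon speed, with corrections suppressed by the Planck energy:

```latex
v(E) \approx c \left( 1 \pm \xi \frac{E}{E_P} \right),
\qquad E_P \approx 1.2 \times 10^{19}\ \text{GeV}
```

Over cosmic distances even so tiny a correction could accumulate into measurable arrival-time spreads or blurring, which is why sharp images of very distant sources put limits on it.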

All the discussion of physics is interesting, but I continue to call upon the OP to explain why they were personally led to feel the need for an “interval of continuity”, during which “nothing in the universe changes”, and why we shouldn’t just quotient out such intervals to size 0…

I should perhaps note that smaller wavelength and higher frequency are, for a photon, the same thing: E = hν = hc/λ. :smack:
It’s a brainfart day for me today.

I suppose I could have been clearer. I guess a related question is: if there exists a period of time during which nothing in the universe changes, has time actually passed?

I will clarify that I would like to believe that time is quantized, but I don’t necessarily believe that it has to be so (apologies if I gave that impression). I just like the idea that something discrete can be perceived as continuous if the change from one time step to the next is small enough. This obviously works on a large scale (30 fps video is enough to give the illusion of continuous motion), but it would be interesting to know whether it works on a small scale as well.
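Here’s a toy way to see the frame-rate point numerically (just an illustrative sketch, nothing to do with real physics): step a trajectory in discrete ticks and compare it to the exact continuous path; the discrepancy shrinks as the ticks get finer.

```python
# Toy illustration: a discretely ticked process approaches the continuous
# trajectory as the tick size shrinks. We step x' = v, v' = a with Euler
# updates and compare against the exact x(t) = x0 + v0*t + a*t**2/2.

def discrete_trajectory(x0, v0, a, t_end, n_ticks):
    """Advance the state in n_ticks equal jumps; nothing happens between ticks."""
    dt = t_end / n_ticks
    x, v = x0, v0
    for _ in range(n_ticks):
        x += v * dt   # position jumps once per tick
        v += a * dt   # so does velocity
    return x

x0, v0, a, t_end = 0.0, 1.0, 2.0, 1.0
exact = x0 + v0 * t_end + 0.5 * a * t_end**2
for n_ticks in (30, 300, 3000):  # ever finer "frame rates"
    approx = discrete_trajectory(x0, v0, a, t_end, n_ticks)
    print(f"{n_ticks:5d} ticks: x = {approx:.6f}, error = {exact - approx:.6e}")
```

At 30 ticks the discrete path is already within a couple of percent of the continuous one, and the error falls in proportion to the tick size.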