Math dopers - refresh my memory on limits

The last time I did any calculus was eleven years ago (!) and I feel kinda stupid for not immediately getting this:

I understand the basic concept of limits, and I read it as "as x approaches c, the limit of the function of x is the function of c."

I don’t get it. I took a quick look at the wiki page on limits in a vain attempt to bring it all back together, but I fear I’m not as smart as I used to be. Please, math geeks, make me smart again!

It’s not really a calculus question.

You can easily construct functions where it doesn’t hold true. E.g. f(x) = (x > 0 ? x + 1 : (x < 0 ? -x + 1 : 0)), and set c = 0; then f(x) tends towards 1 as x approaches 0 from either side, but of course f(0) = 0.
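A quick sketch of that example in Python (the names here are just my own, matching the piecewise definition above), showing the limit and the value disagreeing at 0:

```python
def f(x):
    """The piecewise example above: x + 1 for x > 0, -x + 1 for x < 0, 0 at x = 0."""
    if x > 0:
        return x + 1
    elif x < 0:
        return -x + 1
    else:
        return 0

# Approaching 0 from either side, f(x) tends toward 1...
for h in (0.1, 0.01, 0.001):
    print(f(h), f(-h))    # 1.1 1.1, then 1.01 1.01, then 1.001 1.001

# ...but the value at 0 itself is something else entirely.
print(f(0))               # 0
```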

A more salient example of course is f(x) = 1/x.

In terms of the broader point it’s making I suppose you’d have to see it in context, I get the feeling it’s practically metaphysical in nature, but it could just be a twee thing like I just illustrated… it all comes down to continuous functions in the end…

I don’t think I’ve seen that particular concept expressed with limit notation before, but I think it’s saying that as c approaches x then f(c) will approach f(x). That wouldn’t be true if f(x) were, say, “generate a random number using x as a seed”, but it would be true for saner functions like f(x) = x^2. So, the point is, in a perfect world, things are nicely predictable.

Well, there’s really nothing to get. It sounds from the OP like you might even understand what the image is saying, just not its significance.

The property expressed in the image is actually an extremely standard notion in mathematics, so much so that it gets a special name; it is usually described as “the function f is continuous at the point c”. Essentially, this says that if the input to f changes a tiny bit away from c, the output also only changes by a tiny bit.
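For reference, here is that property in the standard epsilon-delta notation (the usual textbook formulation, not anything from the image itself):

```latex
% f is continuous at c:  \lim_{x \to c} f(x) = f(c),  i.e.
\forall \varepsilon > 0 \;\; \exists \delta > 0 \;\; \forall x:\quad
|x - c| < \delta \;\Longrightarrow\; |f(x) - f(c)| < \varepsilon
```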

The idea is that, in a perfect world, everything is nice, and everything is continuous everywhere, which helps make mathematical analysis tractable. Unfortunately, sometimes we choose to consider pathological functions with discontinuities, which makes everything messier and harder. (Particularly, it makes math homework about limits messier and harder, when you can’t just plug in to calculate “The limit of f(x) at x = c” directly as “f(c)”… I suspect this is what inspired the image)

(For what it’s worth, you can carry out calculus in a framework within which all functions are automatically continuous wherever they’re defined, if you like (e.g., smooth infinitesimal analysis, or other such things of greater or lesser radicality). It just isn’t usually taught or presented this way.)

As far as that goes, in intuitionistic analysis, all functions are continuous.

More interesting is that every function that can be calculated by a computer is automatically continuous. This obviously requires a bit of explanation, starting with what it means for a function to be calculable by a computer. A slightly wrong (but simple) definition would be that for every positive integer k, there is some positive integer n such that n decimal places of the input suffice to tell you k decimal places of the output.

The reason it is slightly wrong is that there are some perfectly well defined numbers (in the sense that you can describe them arbitrarily well by a sequence of rational approximations) for which you don’t know even one decimal place. Strange but true. But think of it in terms of decimal places. If you know the definition of continuity, you will see that my description is equivalent to the old epsilon/delta definition.
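Here is a small Python sketch of that idea, using a toy encoding I’ve chosen purely for illustration: a “computable real” is a function that, given k, returns a rational within 10^-k of the number.

```python
from fractions import Fraction

def sqrt2(k):
    """Rational approximation of sqrt(2) to within 10**-k, by bisection."""
    lo, hi = Fraction(1), Fraction(2)
    while hi - lo > Fraction(1, 10**k):
        mid = (lo + hi) / 2
        if mid * mid < 2:
            lo = mid
        else:
            hi = mid
    return lo

def square(x):
    """f(x) = x^2 on such reals: finitely many digits of the input
    (k + 1 here, since the inputs in play are small) pin down k digits
    of the output -- exactly the property described above."""
    def approx(k):
        a = x(k + 1)
        return a * a
    return approx

two = square(sqrt2)
print(float(two(6)))   # ~2.0, accurate to about 10**-6
```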

The simplest familiar example of a discontinuous function is the postage function. In Canada, a letter up to and including 30g costs 58¢. Beyond that limit it is more. So you throw your letter on a postal scale and it hovers around 30g, oscillating slightly from quantum fluctuations above and below 30g, never quite settling. So how much postage should you put on it? The function is discontinuous as described but also impossible to compute.
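In code (the 58¢ rate as quoted above; the over-30g rate here is a made-up placeholder):

```python
def postage_cents(grams):
    """Letter postage as described above: 58 cents up to and including 30 g.
    (The over-30 g rate is a placeholder, not an actual rate.)"""
    return 58 if grams <= 30 else 100

# Near the threshold, arbitrarily small changes in weight flip the answer:
print(postage_cents(29.9999999))   # 58
print(postage_cents(30.0000001))   # 100 -- a jump no precision can smooth over
```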

On the other hand, there’s the notion in physics of a function being “well-behaved”, which means whatever you need it to mean in a given context. This usually means that the function is continuous (among other properties), but in other contexts, the Heaviside function (for instance) might be considered well-behaved, while continuous approximations to it aren’t. Heck, a physics “well-behaved function” might even be something like the Dirac delta function, which isn’t really even a function at all (or if it is, it’s a function with a very mathematically-peculiar range).
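For reference, the Heaviside function mentioned there, in one common convention for the value at 0:

```latex
% Heaviside step function (the value at 0 varies by convention):
H(x) = \begin{cases} 0, & x < 0 \\ 1, & x \ge 0 \end{cases}
```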

This is it: what you see in the linked image is the standard, Calc I definition of continuity. Intuitively, what it means for a function to be continuous at a point is that what happens at that point is consistent with what happens everywhere near that point. You can predict what f does at x=c from what f does nearby.

Hence, my best guess as to the intended philosophical/metaphysical point is what Greg Charles said:

I’m not seeing this. For example, Sign(X) only has three values, -1, 0, or +1. How can it be continuous?

Try and calculate Sign(X) for the particular X I have in mind. “Ok, tell me exactly which X that is?”, you ask. Well, I can’t specify all infinite precision at once, but if you tell me how many bits of precision you want, I’ll give you that much. I can tell you that X is between -1 and 1. In fact, it’s between -1/2 and 1/2. In fact, it’s between -1/4 and 1/4. (This is good; keep the questions coming, and just let me know as answers to similar questions about Sign(X) come to you.) In fact, X is between -1/8 and 1/8… (Can you say anything about Sign(X) yet? The user’s waiting…). In fact, X is between -1/16 and 1/16… (Still nothing? Beyond that Sign(X) is in [-1, 1], I mean? Alright, maybe you just need to know some more about X.) I can tell you that X is between -1/32 and 1/32… (C’MON!)…

Turns out, Sign, as a function on unbounded-precision reals, isn’t actually computable.
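A sketch of why, in Python, with a toy encoding of my own: a real is a function that, given k, returns an interval of width at most 2^-k containing it. A Sign routine can only answer once the interval excludes zero, and for zero itself that never happens:

```python
def sign(x):
    """x(k) returns (lo, hi) containing the real, with hi - lo <= 2**-k.
    This loops forever when the real is exactly zero."""
    k = 0
    while True:
        lo, hi = x(k)
        if lo > 0:
            return 1
        if hi < 0:
            return -1
        k += 1   # interval still straddles 0: demand more precision

third = lambda k: (1/3 - 2.0**-(k + 1), 1/3 + 2.0**-(k + 1))
print(sign(third))   # 1, after a couple of refinements

zero = lambda k: (-2.0**-(k + 1), 2.0**-(k + 1))
# sign(zero) never returns -- exactly the stalled dialogue above.
```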

Put another way: Suppose you want to know whether program P ever halts (the classic uncomputable problem). If you had the ability to compute Sign, it would be easy!

Just compute Sign(r), where r is the real in [0, 1] which is computable to arbitrary precision as follows: start executing P on the side, and let the i-th bit of r be 0 if P hasn’t halted within i steps, and 1 otherwise. Note that Sign(r) will be 1 if P halts, and 0 otherwise.

So if you could indeed computably calculate Sign, then you could computably decide the Halting Problem. And, conversely, since you can’t actually computably decide the Halting Problem, you can’t actually computably calculate Sign [in full generality; of course, for either of these tasks, you can write programs that can get it right for many particular instances. But the recursive program “Run your halting-determiner on myself, and then do the opposite of what it says I will do” will always stymie you…].
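The reduction, sketched. run_for(P, i) is a hypothetical helper (easy to write for any concrete machine model) that runs program P for i steps and reports whether it has halted yet; sign is the interval-based routine from the sketch above:

```python
# Hypothetical: run_for(P, i) -> True iff program P has halted within i steps.

def r_for(P):
    """The real r described above: its i-th bit is 0 while P is still
    running at step i, and 1 from the step at which P halts onward."""
    def approx(k):
        lo = sum(2.0**-i for i in range(1, k + 1) if run_for(P, i))
        return (lo, lo + 2.0**-k)   # the unseen bits add at most 2**-k
    return approx

# If sign() worked on such reals, this would decide the Halting Problem:
#     halts(P)  ==  (sign(r_for(P)) == 1)
# Since that's impossible, sign() can't be computable in full generality.
```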

In other words, “computable” doesn’t mean “what a computer can do”. What “computable” really means is “what one particular theoretical idealization of a computer, which is only approximately the same as real computers, can do”. All real computers can easily calculate Sign(), since all real computers have limited precision. It’s only theoretical computers, with their infinite precision, that have trouble with it.

No, real computers have trouble with it too… Do you know any real* computers that can compute Sign of arbitrary real* numbers? [*: No relation]

Yes, for practical purposes, it is rarely necessary to bother with real-number (i.e., unbounded-precision) arithmetic, and one can and usually does work with some other form of arithmetic instead. But this is just you choosing to look at a different problem, not the machine being changed to have different capabilities.

Both theoretical and real-world computers can easily calculate Sign as a function from fixed-precision values to fixed-precision values, but that’s a continuous function anyway (its domain is discrete). And both theoretical and real-world computers cannot calculate Sign as a function from unbounded-precision values to unbounded-precision values.

(Indeed, even with real-world programming, we still see the remnants of these problems, in the admonitions that good programmers shouldn’t be testing floating-point values for exact equality and such things. Sure, you can do it, but it will be riddled with numeric instabilities in just such a way as to remind you how it would break down entirely if you switched to unbounded-precision…)
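The standard demonstration, for anyone who hasn’t been bitten by it yet:

```python
import math

print(0.1 + 0.2 == 0.3)              # False: exact equality on floats is fragile
print(0.1 + 0.2)                     # 0.30000000000000004

# The usual advice: compare within a tolerance instead.
print(math.isclose(0.1 + 0.2, 0.3))  # True
```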

The only difference between a theoretical and a real-world computer, so far as this sort of thing goes, is that you never run out of memory on a theoretical computer, whereas on a real-world computer, you may eventually, in the middle of a giant calculation, need but be unable to purchase any more external hard drive space to serve as further virtual memory. But that’s not going to give real computers any less trouble than theoretical computers.

I would say that there’s an awful lot to do with arbitrary real numbers that is uncomputable, just because there’s so frakkin’ many of them hiding everywhere you look! Leave me the rational approximations, and keep the arbitrary real numbers. :smiley:

(Yes, I know, there’s infinitely many rational numbers everywhere you look too. Sigh.)

Just tell me the sign bit.

Well, we’ll need more than a bit, since there’s three possible signs here… :slight_smile:

Still, of course, if you happen to use a representation with an “Is this positive, negative, or zero?” field, the question is trivial. (Similarly, if you happen to use a representation of programs with a “Does this halt?” bit, the halting problem is trivial.)

But using a “Positive, negative, or zero?” field is actually a really poor way to represent real numbers! You can make Sign computable that way, but only at the cost of making the rest of arithmetic (e.g., subtraction) uncomputable.

Imagine an instrument reading in analogue data to greater and greater precision. It will not, in any finite amount of time, be able to read the data to complete precision, but will achieve arbitrarily good precision as you wait for it to sharpen. It will not be possible, in finite time, to ever be certain that the input is zero. If you demand a representation of reals with a “This is exactly zero!” bit, this instrument will never be able to set that bit, and thus it will not be able to produce a valid output when the input actually is zero. For most purposes, that’s not a very good representation of real numbers.

Similarly, if you have two reals X and Y, not known to complete precision in finite time, you will not be able to determine in finite time whether they are exactly equal. If you demand there be a “This is exactly zero!” bit, this will make subtraction uncomputable, as you’ll never be able to set that bit as needed for X - Y.

Computability depends on what one chooses as representation, in the same way that continuity depends on what one chooses as topology. But for most purposes, the best way to think of unbounded-precision reals is as an enumeration of the rationals which the real is less than and greater than (i.e., an open Dedekind cut), and with this representation, computability entails continuity with the standard topology.
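To see the trade-off concretely, here is subtraction on the toy interval encoding from earlier; it composes happily and always terminates, while an exact “is this zero?” test on the result cannot:

```python
def sub(x, y):
    """Subtraction on interval-encoded reals: always computable."""
    def approx(k):
        xlo, xhi = x(k + 1)
        ylo, yhi = y(k + 1)
        return (xlo - yhi, xhi - ylo)   # width <= 2 * 2**-(k+1) = 2**-k
    return approx

# But no finite number of refinements of sub(x, x) ever certifies
# "exactly zero" -- the intervals just shrink around 0 forever, which is
# why a mandatory sign/zero field would make subtraction uncomputable.
```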


In plain-spoken English:

In a perfect world, if you’re getting closer and closer to where you’re going, you will finally get where you’re going.

In a perfect world, of course.

Said the turtle.

Well, the pineapple certainly didn’t get anywhere.