Alright, spring break just ended. I think I’ve lost my ability to do abstract math. If F returns the antiderivative, could you please give values for f, g, and x which demonstrate that F is not local? Say X = [0, 1], or whatever you want.
Sure. First of all, my [symbol]F[/symbol] will return an, not the, antiderivative of a function. (Earlier, I defined an antiderivative of a function f to be a function F such that, at every point x ∈ X, F'(x) = f(x).)
So, let X = [0, 1]. Let [symbol]F[/symbol] be the map defined on the set C(X) of continuous functions on X that satisfies
[ul][symbol]F[/symbol](f)(x) = ∫[sub]0[/sub][sup]x[/sup] f(s) ds, for every f ∈ C(X) and x ∈ X.[/ul]Let f and g be the functions in C(X) defined by
[ul]f(x) = 1,
g(x) = 2x if 0 ≤ x ≤ 1/2 and g(x) = 1 if 1/2 ≤ x ≤ 1.[/ul]Then f and g agree in a neighborhood of 1, but [symbol]F[/symbol](f)(1) ≠ [symbol]F[/symbol](g)(1).
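For concreteness, here is a quick sanity check of those two values in sympy (just a sketch; the variable names are mine, not anything standard):
[code]
import sympy as sp

s = sp.symbols('s')

# f and g as defined above on X = [0, 1]
f = sp.Integer(1)
g = sp.Piecewise((2*s, s <= sp.Rational(1, 2)), (1, True))

# F(h)(x) is the integral of h from 0 to x; f and g agree on [1/2, 1],
# a neighborhood of 1 in X, yet the two values at x = 1 differ:
print(sp.integrate(f, (s, 0, 1)))   # 1
print(sp.integrate(g, (s, 0, 1)))   # 3/4
[/code]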
Great thread.
Can we answer in any case why one mathematical operation is more difficult than another, though?
I think the whole local vs global argument is missing the point. In several senses, finding antiderivatives is dead easy: If we consider, say, continuous functions on an interval [a,b] then an antiderivative of f is found by taking the definite integral from a to x. If you want a more explicit formula and allow me to restrict to analytic functions, then take a Taylor series of f and take antiderivatives term by term.
I took the OP to be asking a different question: Given a function defined by a formula involving elementary functions, arithmetic, and composition, why is it difficult in general to find such a formula for its antiderivative? I.e., this is a question about symbolic integration. In general, it is not only difficult but often impossible to integrate symbolically in this sense. I suspect it’s very difficult to even characterize which functions are symbolically integrable.
So I would trace the difficulty to the complicated nature of such differentiation rules as the product, quotient, and chain rules. Thought of as rules for pushing symbols around, these are not easy to invert.
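As a hedged illustration of what I mean (using sympy, with a function I picked only as an example): differentiating e[sup]-x[sup]2[/sup][/sup] is one mechanical chain-rule step, but its antiderivative cannot even be written in elementary terms, so the computer algebra system has to reach for the special function erf.
[code]
import sympy as sp

x = sp.symbols('x')
expr = sp.exp(-x**2)

print(sp.diff(expr, x))        # -2*x*exp(-x**2): one chain-rule step
print(sp.integrate(expr, x))   # sqrt(pi)*erf(x)/2: not elementary
[/code]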
Tyrrell McAllister: Okay, so a true antiderivative returns a set of functions. You pick one element out of that set and say that it doesn’t match. I would probably go so far as to say that this misses the point of the distinction between definite and indefinite integration. I still think that the former fits my idea of global and the latter my idea of local.
I also realize that what you’re saying won’t work if you restrict it to well-behaved (infinitely differentiable) functions. Of course, for those, if f and g agree on a neighborhood of x, then they’re the same function.
All right, I went and looked at my book on real complexity. It turns out that the integral of a polynomial-time computable function is computable, and may be polynomial-time computable (if something analogous to P = NP holds).
The derivative of a polynomial-time computable function may not even be computable. The book actually says nothing about the time complexity of computing the derivative when it exists.
So in a certain very precise sense, derivatives are actually harder than integrals. That’s fairly counterintuitive.
That’s the whole point of complexity theory, but it’s gonna be a while before any good, general answers emerge.
Ah. I think that you may have me there with your last point about infinitely differentiable functions. At least my view will probably require modification in light of it. I’ll have to think about it. (The quarter has begun, so I may think slowly).
But I’d still like to respond to your first point. I really think that for the purposes of answering the OP, the difference between finding an antiderivative (that is, a particular function) and finding the indefinite integral (that is, a one-dimensional family of functions) is largely irrelevant. The question is “Why is integration hard?”. All symbolic integration that I’ve ever seen can be reduced to the following two-step procedure:
[ul]Step 1: Find an antiderivative F of f.
Step 2: Generate the set of all of the antiderivatives of f.[/ul] Once Step 1 has been completed and a function F has been found, Step 2 is always trivial, because the set in question is always { F + c : c ∈ ℜ }, where ℜ is the set of real numbers. Therefore, all of the difficulty of integration lies in Step 1. If a reason can be provided for why Step 1 is hard, that reason will entirely account for why integration in general is hard.
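As a small sketch of the two-step procedure in sympy (the example function is mine, not anything from the thread): the system does all of its work in Step 1; Step 2 is just tacking on an arbitrary constant.
[code]
import sympy as sp

x, c = sp.symbols('x c')
f = x * sp.cos(x)

F = sp.integrate(f, x)   # Step 1: one antiderivative, x*sin(x) + cos(x)
family = F + c           # Step 2: the set of all antiderivatives, { F + c }

print(F)
print(family)
[/code]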
Topologist, you say
How can you find this to be a satisfying answer? As far as I can tell, your answer to the question “Why is integration hard?” essentially reduces to “Because the process by which we integrate is difficult to carry out.” This is just restating the question with more words. I realize that the question has a sort of intrinsic absurdity that makes it difficult to know what would qualify as an answer. It’s sort of like asking “I can follow the technical proof that <arcane mathematical fact> is true, but why is it true?” What qualifies as a satisfactory answer depends more on human psychology than pure logic.
Also, you write
But the point is that taking the definite integral from a to x for general x is hard; it is Step 1 (using my notation above).
But taking the antiderivative of each term is hard, in the sense of the OP, so you haven’t reduced the problem to a “dead easy” one.
I want to repeat this, because I think it really strikes at the heart of the matter. From an algorithmic standpoint, derivatives are harder than integrals. Why it’s this way is still an open question, but there are many people around the world working on this and related questions in order to find a mathematical answer.
Note that this book is talking about numerical computations, not symbolic ones. So we do have a bit of a disconnect there, but I’m not convinced it makes the results mentioned above irrelevant. Even Tyrrell’s local and global distinction deals with the values of a function.
Also, I don’t think that that’s terribly surprising. The integral of a function over a set depends on the values of that function over the whole set. No surprise there. btw, Tyrrell, your definition of global doesn’t address any modern notions of integration, which don’t generally require the domain of the function to be connected.
Bah. Sorry Topology. I retract my last point. Antidifferentiating a Taylor series term-by-term is certainly “dead easy”.
But, nonetheless, I’m not so sure that it reduces the entire problem of integrating a function to a dead easy one. I have two reasons for thinking this.
First, to extract the Taylor series of a function, you need to extract an expression for a derivative of arbitrary order. When this involves repeated application of product rules and chain rules, it can get pretty nasty. Sure, given enough computational resources, you probably can do it, but the same can be said about factoring integers.
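As a rough, hedged illustration of that expression swell (sympy again, with a function I made up): counting the operations in successive derivatives shows how quickly repeated product and chain rules inflate the formula.
[code]
import sympy as sp

x = sp.symbols('x')
f = sp.exp(sp.sin(x)) / (1 + x**2)   # an arbitrary smooth example

expr = f
for n in range(1, 6):
    expr = sp.diff(expr, x)
    # count_ops is a crude measure of how large the n-th derivative has grown
    print(n, sp.count_ops(expr))
[/code]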
Second, (and, to me, more importantly) even after you have the Taylor series of the integral, translating it back into a closed-form expression can be as hard as symbolically integrating the original function directly (or so it seems to me).
I also don’t require the domain to be connected. The neighborhoods mentioned in my definition of “local” are with respect to the subspace topology inherited by X from ℜ (in the cases that we’ve been discussing). Neighborhoods in disconnected domains make sense in this context.
It is true, however, that I’ve been restricting my attention to a relatively narrow class of functions: the continuous ones.
Ah jeez. And sorry for calling you Topology, Topologist.
I’m starting to think that the question isn’t so much, What is it about integration that makes it harder than differentiation? I’m starting to think the real question is, What is it about closed symbolic form that makes integration harder than differentiation in it?
One means of writing functions that I rather like is as the sequence defined by the coefficients of the Maclaurin series, times the respective factorials. That is, a[sub]n[/sub] = f[sup](n)[/sup](0). So, for instance:
sinh(x) = { 0, 1, 0, 1, 0, 1, … }
This is very different from closed form, but for well-behaved functions, it carries just as much information, and it’s just as reasonable. It makes certain things like function composition hard as heck, but certain things like differentiation and antidifferentiation way easy. For instance, take:
x[sup]2[/sup] sinh(x) = { 0, 0, 0, 6, 0, 20, 0, 42, 0, 72, … }
Taking the derivative of this function in closed form is not hard. Taking the antiderivative is rather harder. But doing it in this sequence notation is a snap. Just shift one space left or right:
Derivative: { 0, 0, 6, 0, 20, 0, 42, 0, 72, … }
Antiderivative: { C, 0, 0, 0, 6, 0, 20, 0, 42, … }
When viewed like this, I tend to think that differentiation and antidifferentiation are almost the same thing, and certainly the same level of difficulty. Yeah, I know there’s a C there, but really, that’s not a major difference - they still carry the same amount of information, total. And from a purely mathematical standpoint, I tend to think that looking at it like this is no less valid than looking at it in terms of symbolic closed form. From one point of view, integration is harder. From another, they’re the same. Even though I don’t understand ultrafilter’s fancy-schmancy complexity theory, it looks like it says that from another point of view, integration is easier. So I am skeptical that there is any purely mathematical concept (like global vs. local) that can explain the question in the OP.
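If it helps, here is a toy sketch of that sequence representation in Python (finite truncations only, and the function names are mine): both operations really are just shifts.
[code]
def derivative(a):
    """Derivative: drop a[0] and shift the sequence one place left."""
    return a[1:]

def antiderivative(a, C='C'):
    """Antiderivative: shift right and put the constant of integration up front."""
    return [C] + a

# a_n = f^(n)(0) for f(x) = x^2 * sinh(x), listed from n = 0
x2_sinh = [0, 0, 0, 6, 0, 20, 0, 42, 0, 72]

print(derivative(x2_sinh))       # [0, 0, 6, 0, 20, 0, 42, 0, 72]
print(antiderivative(x2_sinh))   # ['C', 0, 0, 0, 6, 0, 20, 0, 42, 0, 72]
[/code]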
I can now respond to this objection. The portion I’ve italicized is not true in general. There is a standard example of a function that is infinitely differentiable at every point and identically 0 on all of (−∞, 0], but which is not the zero function:
Let g be defined on ℜ by
[ul]g(x) = 0 if x ≤ 0 and g(x) = e[sup]-1/x[sup]2[/sup][/sup] if x > 0.[/ul]
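A hedged check of why this works (sympy; my own snippet): each one-sided derivative of e[sup]-1/x[sup]2[/sup][/sup] tends to 0 as x → 0[sup]+[/sup], which is exactly what makes the piecewise g infinitely differentiable at 0 with every derivative equal to 0 there.
[code]
import sympy as sp

x = sp.symbols('x', positive=True)
h = sp.exp(-1/x**2)

for n in range(4):
    # every one-sided limit comes out 0, matching the zero branch on x <= 0
    print(n, sp.limit(sp.diff(h, x, n), x, 0, '+'))
[/code]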
But that’s really the point I was trying to make. The context where integration appears to be difficult is in finding closed-form expressions for the antiderivatives of given closed-form expressions. In other contexts/senses integration is easy: ultrafilter’s book on complexity is obviously looking at a different context in which integration is less complex than differentiation; Achernar makes the point that integration and differentiation are both easy in the context of MacLaurin or Taylor series.
So I think we’re beginning to agree on the correct question. And no, I don’t think I gave a satisfactory answer when pointing to the complicated form of some of the symbolic differentiation rules. I just suspect that it would be possible to analyze the complexity of the symbolic computations involved in differentiation and integration and that the form of these differentiation rules would be important in that analysis. But now I’m really talking off the top of my head.
Since we’re talking about numerical computations here, not analytical, why can’t I just compute a derivative by finite differencing?
And Tyrrell McAllister, the function g(x) = e[sup]-1/x[sup]2[/sup][/sup] isn’t infinitely differentiable, unless you restrict yourself to the reals. If you look at its behaviour at complex numbers near the origin, you’ll find that it’s extremely poorly-behaved. But on the other hand, it does provide a handy counterexample to saying that you can represent functions as Taylor series. If you try to construct a Taylor series for that function, you’ll either conclude (correctly) that it’s impossible, or end up with 0.
Wait, now I’m confused, Chronos. Why can’t I represent that function as a Taylor series? I’ll buy not being able to expand in a Maclaurin series, but why can’t I expand about some point other than x=0?
That’s an approximation method, right? I think my source is using a computation model which allows for exact real computation. In that sense, it’s non-computable.
I’ll get more detail when I get home.
Of course it’s not complex analytic. I suppose that it was never stated explicitly, so I guess I should say that I’ve been assuming throughout this thread that the differentiation mentioned in the OP was real differentiation.
You can expand a Taylor series about some other point, but you’ll find that the radius of convergence will not include the origin. This is because the function actually has a nasty essential singularity at the origin if you think of it in the complex plane. Everything looks nice on the real axis, but if you take the limit of the function e[sup]-1/z[sup]2[/sup][/sup] as z approaches zero along the imaginary axis, you’ll see that it blows up.
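A small hedged check of that blow-up (sympy; my snippet): substituting z = iy turns −1/z[sup]2[/sup] into 1/y[sup]2[/sup], and the limit along the imaginary axis is indeed infinite.
[code]
import sympy as sp

y = sp.symbols('y', real=True)

# z = i*y on the imaginary axis, so -1/z**2 = 1/y**2
expr = sp.exp(-1/(sp.I*y)**2)
print(expr)                  # exp(y**(-2))
print(sp.limit(expr, y, 0))  # oo
[/code]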