Math: Integrals harder than derivatives, why?

OK, so: subtraction undoes addition; division undoes multiplication; integration undoes differentiation. In the first two cases, it’s just a matter of mastering the ability to reverse how you think. But in the last case, there’s more to finding an integral than reversing your thinking. And many integrals are counter-intuitive, or perhaps I should say “non-obvious”. Why are integrals so blinking hard? (I trust there’s a factual answer, otherwise Administrators, bump it to IMHO!)

Jinx :confused:

There’s some factual basis to it. First: Why are derivatives so easy? (OK class, stop snickering.) Every elementary function (power, exponential, log, trig function) has a known derivative and every arithmetic operation has a corresponding differentiation rule, as does composition. So, given any function built up from elementary functions by arithmetic and composition, which is to say, given any formula you can write, finding its derivative is completely mechanical. The only thought required is in parsing the formula into its constituent parts to see what rules you need to apply.
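That mechanical quality is easy to demonstrate with a computer algebra system. Here’s a minimal sketch using SymPy (assuming it’s available); the particular formula is made up, but any formula built from elementary pieces works the same way:

```python
# Differentiation is mechanical: SymPy applies the power, product,
# quotient, and chain rules recursively to any composed formula.
import sympy as sp

x = sp.symbols('x')
f = sp.exp(sp.sin(x**2)) / (1 + sp.log(x))  # an arbitrary composed formula

df = sp.diff(f, x)  # always succeeds, purely by rule application
print(df)
```

No search or cleverness is involved; the same call works on any formula you can type in.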

There are integration rules analogous to some of the differentiation rules, but not all, and therein lies the problem. There is no product rule for integration; integration by parts is the closest you get. There is no quotient rule for integration and there is no chain rule. There is change of variables (substitution), but it does not apply as generally as the chain rule for derivatives.

This is not just a lack of imagination on the part of mathematicians. There are plenty of formulas you can write down that you can prove have no elementary antiderivatives. One of the simplest and most well-known is exp(-x^2). The existence of these formulas shows that there cannot be integration rules covering all cases the same way the differentiation rules do.
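A computer algebra system makes this concrete: asked for that antiderivative, SymPy (assuming it’s available) has to reach outside the elementary functions and answer in terms of erf.

```python
# exp(-x**2) has no elementary antiderivative; SymPy expresses the
# result using the special function erf instead.
import sympy as sp

x = sp.symbols('x')
antideriv = sp.integrate(sp.exp(-x**2), x)
print(antideriv)  # sqrt(pi)*erf(x)/2
```

The answer is exact, but only because erf was added to the vocabulary precisely for this integral.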

Well, the derivative of an elementary function is always an elementary function, but the converse is not true. And there’s more involved in taking an integral: look at how much space you need to define the integral as opposed to the derivative.

I’d be lying if I said that I could think of a simple answer. Complexity theory (the study of how hard a given problem is, and why) is still pretty young, and generally tends to stick with combinatorial problems. I have a book on the complexity of real functions; I’ll see if it says anything.

But there are other cases that you left out: what about exponentiation and logarithms? Or, finding the cube of a number is a lot easier (it’s just multiplication) than finding the cube root. Although some inverse functions may seem just as easy (I dunno, division always seemed harder to me), the inverse problem in mathematics is notorious.

All right, I looked in that book, and the author does not directly compare integration and differentiation. It looks like you need fairly strict conditions on a function to be able to compute its derivative, and less strict conditions to be able to compute its (definite) integral. So maybe integration is actually easier!

I haven’t really studied that book in great detail, and to do so would require a fair amount of effort on my part (recursive measure theory doesn’t sound like fun). If I do happen to find anything, I’ll get back.

Keep in mind the difference between numerical and symbolic integration. We can’t take the square root of 2 symbolically; we just add the symbol “sqrt(2)” to our lexicon and move on. No, “1.4142135…” doesn’t count; it’s just an approximation. But “1.4142135…” is a numeric answer even if it isn’t a symbolic one.

Same with integrals. Functions which can’t be integrated at all are in a sense unusual; for example, most non-mathematicians would be hard-pressed to think of a non-integrable function. But that’s numeric integration. It’s symbolic integration that’s really tricky; but all that means is that we don’t always have the symbols to express an integral symbolically, and occasionally have to add a new symbol (like “erf(x)”) to our lexicon and move on.

And occasionally we can express an integral symbolically but have to do some backflips, like integration by parts, first. Again, that has to do with the fact that our symbols aren’t always adequate to represent the integrals we’re looking for.
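The numeric-vs-symbolic distinction can be sketched in plain Python with a homemade trapezoid rule; the function and interval below are arbitrary choices for illustration:

```python
# Numeric integration needs no special symbols: just sum up thin slices.
# Here we integrate exp(-x**2) from 0 to 1, a function with no elementary
# antiderivative, and compare against the "new symbol" answer via erf.
import math

def trapezoid(f, a, b, n=100_000):
    h = (b - a) / n
    total = 0.5 * (f(a) + f(b))
    for i in range(1, n):
        total += f(a + i * h)
    return total * h

numeric = trapezoid(lambda t: math.exp(-t * t), 0.0, 1.0)
symbolic = math.sqrt(math.pi) / 2 * math.erf(1.0)  # erf as the added symbol
print(numeric, symbolic)  # the two agree to many decimal places
```

Numerically the integral is routine; it’s only the demand for a closed-form symbol that makes it “impossible”.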

My old calculus book agrees with Topologist and says the following, “The differential calculus furnished us with a General Rule for differentiation. The integral calculus gives us no corresponding general rule that can be readily applied in practice for performing the inverse operation of integration. Each case requires special treatment … we must be able to answer the question, What function, when differentiated, will yield the given differential expression?

“Integration, then, is a tentative process …”

Here’s my thought. The derivative involves extracting information from something you already have. Technically, all of that information is encoded in the function and you’re just extracting it. You can easily find the slope of the function by looking at its graph right around that point. For a given point on the function there is only one slope.

The integral is different, though, because you’re trying to infer information about a more complicated function. You’re trying to figure out what something is based on information about it. That’s a bad way of putting it. Basically the integral asks, “what function has, at each point, a slope equal to the value of my function at that point?” There happens to be more than one possible answer for indefinite integrals, hence the + C at the end of integral answers.
A derivative only has one answer.
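The “+ C” point can even be checked numerically; this sketch (with made-up example functions) shows two different functions whose derivatives agree everywhere, so the derivative alone can’t tell them apart:

```python
# Two antiderivatives of 2*x that differ only by a constant:
# their derivatives are numerically identical, so differentiation
# discards the constant and integration has to add it back (+ C).
def F1(x):
    return x**2

def F2(x):
    return x**2 + 5.0

def deriv(f, x, h=1e-6):
    return (f(x + h) - f(x - h)) / (2 * h)  # central difference

for x in (0.0, 1.0, 2.5):
    assert abs(deriv(F1, x) - deriv(F2, x)) < 1e-6
print("same derivative everywhere, different functions")
```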

If I said, “here’s a vegetable, what color is it?” you could easily answer. (Like the derivative.)

But if I say, “a vegetable has this color, what is it?” then it’s a lot more complicated. (Integral.)
Maybe that has nothing to do with it. I had this whole baseball card analogy planned out, but I kept finding flaws with it. Ah well. Good luck in your search for truth.

Also, addition and multiplication are in a sense their own inverses: subtraction is just adding the negative, and division is multiplying by the multiplicative inverse. You can’t do this with integration.

I am trying to think of a more “fundamental” reason for why integration is harder than differentiation, other than “because there’s an algorithm for differentiating any given elementary function, but there isn’t an algorithm for integrating”. An answer like that just pushes back the question. Why is there an algorithm for differentiating, but not for integrating?

I think that a more satisfying answer might have something to do with the local nature of the derivative versus the global nature of the integral. Finding the integral of a function f between two points a and b is the same as finding the area under the curve of f between a and b. This area depends on all the values that f takes over the entire interval between a and b. So, the integral gives you global information about a function; that is, information about how the function behaves over a large region.

On the other hand, taking the derivative of f at a and b is the same as finding the slope of the tangents at these two points, which involves only information about the values that f takes in arbitrarily small neighborhoods around each of these two points. So, the derivative only gives you local information about a function; that is, information about how the function behaves in very tiny regions.

To summarize with a diagram,

         Type of Input                           Type of Output

    { global information }     Integration
    {        i.e.        }   -------------->   { global information }
    { an equation for f  }

    { global information }   Differentiation
    {        i.e.        }   -------------->   { local information }
    { an equation for f  }
Integration gives a qualitatively more general kind of information, so it is unsurprising that it is more difficult.

Yeesh. Sorry about my “diagram” messing with the formatting like that.

That local vs. global thing makes a certain amount of sense, but I don’t buy it. Integration is only global for definite integrals. Antiderivatives are just as local as derivatives.

Here’s a book that the OPer and others might find interesting. I haven’t read it myself. I’ve only flipped through it in book stores.

Inverse Problems : Activities for Undergraduates

From the back:

What do you mean? If I have a function f, and I find an antiderivative F (that is, a function F such that, at every point x, F’(x) = f(x)), then I can find any definite integral of f simply by computing F(b) - F(a). So if you agree that definite integrals are global, then you must concede that indefinite integrals are global, since we may immediately compute definite integrals from them. (Assuming, of course, that f is an elementary function)
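That’s just the Fundamental Theorem of Calculus at work; a quick SymPy sketch (assuming SymPy is available, with cos(x) as an arbitrary example):

```python
# Fundamental Theorem of Calculus: one antiderivative F of f gives
# every definite integral as F(b) - F(a).
import sympy as sp

x = sp.symbols('x')
f = sp.cos(x)
F = sp.integrate(f, x)            # an antiderivative: sin(x)
a, b = 0, sp.pi / 2

via_ftc = F.subs(x, b) - F.subs(x, a)   # sin(pi/2) - sin(0)
direct = sp.integrate(f, (x, a, b))     # definite integral, computed directly
print(via_ftc, direct)
```

Both routes give the same number, which is exactly the point: one antiderivative unlocks every definite integral.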

I can’t quite explain why, but intuitively this strikes me as being a bit similar to multiplication vs. factoring.

What is 597 x 3343? Relatively easy: 1,995,771

But: what are the factors of 1,995,771? Tougher problem. (For this example I’m not even sure if they’re the prime factors.) Modern cryptography relies on this asymmetry between the ease of multiplying large numbers vs. the difficulty of factoring them. There is a sense of “trying to get the eggs back out of an omelette”.
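For what it’s worth, a computer algebra system can check both directions of this example (this sketch assumes SymPy is available):

```python
# Multiplying is a single mechanical step; recovering the factors
# requires a search, which is why the two directions feel so different.
import sympy as sp

product = 597 * 3343
print(product)                  # 1995771

# factorint returns the full prime factorization as {prime: exponent}.
print(sp.factorint(product))   # {3: 1, 199: 1, 3343: 1}
```

(So the poster’s factors weren’t quite prime: 597 splits further as 3 × 199, while 3343 is prime.)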

Not sure what any of this has to do with the difference in complexity between integration and differentiation. Interesting question.

The global vs local argument is also the reason numerical integration is hard, particularly when you consider Control Theory. If you have a sensor that measures velocity, then you are dependent upon the exacting accuracy of every bit of information from the beginning in order to calculate position, while if you have a sensor that measures position you are only dependent upon a few bits of info around the examined time to calculate velocity. Also, as time moves on you accumulate errors when you have to integrate to find position while errors are noncumulative when you take a derivative to find velocity.
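A small simulation illustrates the point; the sensor model here (constant velocity, a made-up bias and noise level) is purely an assumption for illustration:

```python
# Control-theory sketch: integrating a biased, noisy velocity sensor
# accumulates error over time; differencing a noisy position sensor
# has an error that stays bounded, since it only uses nearby samples.
import random

random.seed(0)
dt, n = 0.01, 10_000
true_v = 1.0                     # constant true velocity
bias = 0.01                      # small constant bias on the velocity sensor

# Integrate velocity -> position: the bias accumulates into drift.
pos_est = 0.0
for _ in range(n):
    pos_est += (true_v + bias + random.gauss(0, 0.1)) * dt
true_pos = true_v * n * dt
print("position drift after integrating:", pos_est - true_pos)

# Difference position -> velocity: error depends only on local noise.
noise = 0.001
v_errs = []
prev = random.gauss(0, noise)    # noisy position reading at t = 0
for k in range(1, n + 1):
    cur = true_v * k * dt + random.gauss(0, noise)
    v_errs.append(abs((cur - prev) / dt - true_v))
    prev = cur
print("worst velocity error from differencing:", max(v_errs))
```

The drift keeps growing with time, while the differencing error just rattles around a fixed band.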

In effect, integration does not undo differentiation the way division undoes multiplication. You could say some info is lost in the differentiation process, and that info must be put back in when you integrate. Or, a formula contains all the information about its derivatives, but only part of the information about its integrals. In multiplication and division no information is lost.

This line of thinking has merit. Generally, it’s easy to go from a general form to a specific form, but very hard to go from a specific form to a general form. That’s why deduction is easy, while induction is hard.

In the former case, you have all the information you need; you just apply it to a specific situation. In the latter case, you are trying to make heads or tails of the information you have on hand, sometimes like a blind man groping an elephant.

Your concepts of global and local are not well-defined enough for you to make this assertion. It’s possible that I misunderstood what you meant by them, but from what I could tell, antidifferentiation is local under the meaning you ascribe to that word.

A function f is related to its antiderivative in exactly the same way that it’s related to its derivative, in the sense that f contains enough information to uniquely determine either related function (up to a constant, in the antiderivative’s case).

I still think the key here is that integration is a very complicated process compared to differentiation. That gives it more power, but it makes it harder to do.

Here’s the thing. When you take the antiderivative, you’re almost trying to ‘undo’ a previous process. The problem is that you might not be able to clearly see the process that was used.

For instance, if we have the function y = xe^x,

then the derivative is y’ = e^x + xe^x.

If you think about taking the integral of y’ you’ll probably be considering two integrals: the first one is the easy e^x, and the second could be an integration by parts. It’s not obvious that the function was the derivative of xe^x. True, that relationship is still there, the information is there, but it’s harder to infer. Now there could be any number of equivalent-looking answers. If you do it the way I suggested, you will get the original xe^x + C.
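That example checks out end to end in SymPy (assuming it’s available); note that SymPy’s integrate quietly does the by-parts bookkeeping and returns the antiderivative without the + C:

```python
# Differentiate x*exp(x), then integrate back: SymPy recovers the
# original function (minus the dropped constant of integration).
import sympy as sp

x = sp.symbols('x')
y = x * sp.exp(x)
dy = sp.diff(y, x)               # exp(x) + x*exp(x)
recovered = sp.integrate(dy, x)  # the by-parts step happens internally
print(dy, "->", recovered)
```

Going forward was one rule application; going back required recognizing a by-parts pattern, which is exactly the asymmetry under discussion.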

The second part of your argument makes no sense to me. Saying that integration is harder because it’s more complicated doesn’t answer the question: why is it more complicated?

As for the whole “global” vs. “local” argument: I thought about it, but I have one gripe. The slope of the function is determined by the other points around it. If you have one point in space, it doesn’t have a slope. There has to be a change in something, which can only exist relative to something else. Just a thought. IANA Mathematician.