Why is differentiation always doable algorithmically, but integration not?

You’re slowing down, dope.

Obligatory xkcd:

Maybe he or she is talking not about functions that are or are not integrable or differentiable, but about differential fields of functions that may or may not be closed under certain types of extensions. For instance, logicians are able to construct differentially closed fields which already contain the solution to every system of differential equations. And, on the other hand, there are functions, like the error function, which can be shown not to be elementary via Liouville’s principle. So people do study such classes of functions, and there is some differential Galois theory applicable.
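To make the error-function point concrete, here’s a quick illustration (my own, using SymPy, so take the specifics as an aside rather than anything from the thread): ask for the antiderivative of exp(-x^2) and the system is forced outside the elementary functions.

```python
from sympy import symbols, exp, integrate, diff

x = symbols('x')

# The antiderivative of exp(-x**2) is not elementary (Liouville);
# SymPy has to reach for the special function erf to express it.
F = integrate(exp(-x**2), x)
print(F)           # sqrt(pi)*erf(x)/2

# Differentiation, by contrast, is purely mechanical and lands
# right back in the class we started from.
print(diff(F, x))  # exp(-x**2)
```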

You can simplify even further than that:

Miklós Laczkovich also removed the need for π and reduced the use of composition. In particular, given an expression A(x) in the ring generated by the integers, x, sin(x^n), and sin(x·sin(x^n)), both the question of whether A(x) > 0 for some x and the question of whether A(x) = 0 for some x are unsolvable.

It’s related to the fact that most integrals are solved via the Fundamental Theorem of Calculus, as opposed to derived from the definition of the integral. All the derivative rules start from the definition as the limit of the difference quotient: you work out simple results first, then proceed to more difficult ones as more rules are proven. I do know that we solved a few definite integrals directly from the limit definition for maybe one day of class, but that approach was abandoned fairly quickly (and wasn’t applied to indefinite integrals) since the FTC was known and makes things much easier. When your primary method of solving integrals is reversing some other process, entropy is definitely going to impede your progress sometimes.
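To see how constructive that definition is, here’s a small SymPy sketch (mine, not the poster’s): the difference quotient can be fed straight into a limit, and it reproduces the rule-based derivative.

```python
from sympy import symbols, sin, limit, diff, simplify

x, h = symbols('x h')

# Derivative straight from the definition: the limit of the difference quotient.
f = sin(x) * x**2
from_definition = limit((f.subs(x, x + h) - f) / h, h, 0)

# It agrees with the rule-based derivative.
print(simplify(from_definition - diff(f, x)))  # 0
```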

I’m not sure which functions can be analytically integrated directly from one of the integral definitions out there, but I imagine it’s much harder, since the rigorous definitions of integrals are somewhat abstract and don’t define anything resembling a constructive algorithm like the one you get from the limit of the difference quotient as the definition of the derivative. Sure, in the rigorous definition you get your epsilons and deltas there too, but you’ve got an expression you can evaluate before you apply the epsilon-delta definitions, and that’s just not the case for integrals.
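That constructive flavor can be made very literal. Here’s a toy rule-based differentiator (my own sketch, with a made-up tuple encoding of expressions): every rule is local and recursive, so the procedure always terminates, and there is simply no analogous complete rule set for antiderivatives.

```python
# Expressions: the variable 'x', a number, or a tuple like
# ('+', a, b), ('*', a, b), ('sin', a), ('cos', a), ('exp', a).
def d(e):
    """Differentiate expression e with respect to x, purely by rules."""
    if e == 'x':
        return 1
    if isinstance(e, (int, float)):
        return 0
    op = e[0]
    if op == '+':
        return ('+', d(e[1]), d(e[2]))                # sum rule
    if op == '*':
        return ('+', ('*', d(e[1]), e[2]),
                     ('*', e[1], d(e[2])))            # product rule
    if op == 'sin':
        return ('*', ('cos', e[1]), d(e[1]))          # chain rule
    if op == 'cos':
        return ('*', ('*', -1, ('sin', e[1])), d(e[1]))
    if op == 'exp':
        return ('*', ('exp', e[1]), d(e[1]))
    raise ValueError(f'unknown operator {op!r}')

# d/dx [x * sin(x)] -> sin(x) + x*cos(x), as an unsimplified tree
print(d(('*', 'x', ('sin', 'x'))))
```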

sqrt(1 + x^4) would be an example, right?

I believe that is a non-elementary elliptic integral; someone who knows what they are doing should be able to prove it is non-elementary by applying Liouville’s theorem plus some complex analysis.
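For what it’s worth, here is what a CAS does with it (SymPy again; my example, and the exact output depends on the version): it finds no elementary antiderivative, and instead answers in special functions or leaves the integral unevaluated.

```python
from sympy import symbols, sqrt, integrate

x = symbols('x')

# No elementary antiderivative exists; expect a hypergeometric/elliptic
# answer or an unevaluated Integral, depending on the SymPy version.
print(integrate(sqrt(1 + x**4), x))
```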

If you start from the elementary functions (roots, trigonometric, exponential, logarithmic, and compositions of those) and adjoin antiderivatives as well, you get a bigger class of functions, which now includes the error function and the elliptic and logarithmic integrals and so on. It still does not include things like arbitrary hypergeometric functions, Bessel functions, the gamma function, …

Interestingly, as noted in the OP, for most closed-form functions you come across it is easy to find a closed-form derivative but not a closed-form integral. But paradoxically, if you drop the closed-form requirement, the reverse is true: every function that is continuous on an interval has an antiderivative over that interval, but not every continuous function is differentiable.

On thinking some more about it: the integral of a function depends on its values everywhere (at least, as far out as the limits of integration), but the derivative only depends on values very close to the point you’re looking at. This certainly explains why integrals more often exist (defects at individual points are drowned out by the behavior elsewhere), and I think it also helps account for why it’s harder to find integrals in closed form.

But back to the problem of determining that two expressions are equivalent: in the real world, given any two expressions that can be evaluated, it’s really easy to determine, to a very high degree of confidence, whether the two expressions are equivalent. I get that “sample a bunch of points to high precision” doesn’t constitute proof, but are there any known pairs of expressions that pass the “sample a bunch of points” test (i.e., which are very strongly suspected to be equivalent), but for which the equivalence isn’t proven?
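For reference, a minimal version of that test might look like this (my sketch; the sample range and tolerance are arbitrary choices on my part):

```python
import math
import random

def probably_equal(f, g, trials=10_000, tol=1e-9):
    """Sample random points; False means we found a witness of inequality,
    True only means 'no counterexample found'."""
    for _ in range(trials):
        x = random.uniform(-100, 100)
        try:
            fx, gx = f(x), g(x)
        except ValueError:  # domain error, e.g. log of a negative number
            continue
        if abs(fx - gx) > tol * max(1.0, abs(fx), abs(gx)):
            return False
    return True

# sin(2x) vs 2 sin(x) cos(x): passes the test, and is in fact an identity.
print(probably_equal(lambda x: math.sin(2 * x),
                     lambda x: 2 * math.sin(x) * math.cos(x)))  # True
```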

Adapted from a post on math.stackexchange.com:
gcd(n^17 + 9, (n+1)^17 + 9) = 1

Works for all n until you get to 8424432925592889329288197322308900672459420460792433.

Ok, that’s not a real-valued function, but still…
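The failure is easy to confirm with exact integer arithmetic (my check, using Python’s math.gcd):

```python
from math import gcd

# The first n at which the pattern breaks, from the quoted post.
n = 8424432925592889329288197322308900672459420460792433
g = gcd(n**17 + 9, (n + 1)**17 + 9)
print(g > 1)  # True: the two values share a huge common factor here
```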

Certainly an example of what I said. Whether there is a known integral is another question. There are sites that do indefinite integration of any function for which there is a known integral. It might be interesting to try that. But if it fails, that is suggestive, but not proof.

You don’t even need to drop the closed-form requirement.
Consider f(x) = |x| [that’s absolute value].
There’s a closed-form, always-defined integral of f(), namely F(x) = x|x|/2, but f’s own derivative is not defined at zero.

Though, I guess you could argue that absolute value isn’t really a closed form; it’s one of those bastard heretical functions with special cases and ‘if’ in the definition.

Elliptic integrals are hardly a mysterious unknown case, as we saw. But, in general, the practical answer seems to be that there are computer implementations of algorithms where you type in your function and the claim is that either you get a proof that your integral is or is not elementary, or you get an error message and still do not know. Example: http://fricas-wiki.math.uni.wroc.pl/RischImplementationStatus
But I would not say you get “suggestions”, assuming of course there are no nasty bugs somewhere (you could always try a couple of other programs).
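SymPy actually exposes a partial Risch implementation along these lines (my example; risch_integrate handles only the purely transcendental case, so it won’t touch an algebraic integrand like sqrt(1 + x^4)). When it succeeds, a NonElementaryIntegral result is a proof, not a guess:

```python
from sympy import symbols, exp
from sympy.integrals.risch import risch_integrate, NonElementaryIntegral

x = symbols('x')

result = risch_integrate(exp(x**2), x)
print(result)  # NonElementaryIntegral(exp(x**2), x)

# Not a heuristic failure: this is a proof that exp(x**2)
# has no elementary antiderivative.
print(isinstance(result, NonElementaryIntegral))  # True
```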

I meant to write this earlier, but it is instructive to try to evaluate this integral. Or, let’s begin with integrating x sqrt(1 + x^4) for comparison. To get rid of the radical, let u^2 = x^4 + 1 and work on the compact Riemann surface corresponding to u^2 = x^4 + 1.

We want to integrate ux dx, so expand it around its poles, which are off at infinity. (Namely, one pole is at x → infinity with u/x^2 → 1, and the other at x → infinity with u/x^2 → -1.) Letting t = 1/x, so that dx = (-1/t^2) dt, and expanding around the two poles, we get
x = 1/t
u = ±(t^(-2) + (1/2)t^2 - (1/8)t^6 + …)
ux dx = ∓(t^(-5) + (1/2)t^(-1) - (1/8)t^3 + …) dt

So, if we did this right, we have two poles of order 5; now, to integrate away the ∓t^(-5) term we have to find an appropriate rational function with poles of order at most 4 at infinity; in this case
ux^2 = ±(t^(-4) + 1/2 + …)

So (1/4)ux^2 is the desired function. As for the remaining ∓(1/2)t^(-1) dt, we see from the expansions above that
u + x^2 = 2t^(-2) + (1/2)t^2 + …
around one of the points, and
u + x^2 = -(1/2)t^2 + (1/8)t^6 + … around the other: a double pole and a double zero.
So its logarithmic derivative d(log(u + x^2)) = (2x/u) dx has residues ∓2 at those two points, so divide it by 4 to match.

Check: d(ux^2 + log(u+x^2)) = 4ux dx
i.e. an antiderivative of x sqrt(x^4 + 1) is
(1/4)(x^2 sqrt(x^4 + 1) + log(x^2 + sqrt(x^4 + 1))).
This integral turns out to be really easy; we did not have to do any serious linear algebra or even deal with complex numbers.
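A quick symbolic check of that antiderivative (my verification, not part of the original post):

```python
from sympy import symbols, sqrt, log, diff, simplify

x = symbols('x', positive=True)

F = (x**2 * sqrt(x**4 + 1) + log(x**2 + sqrt(x**4 + 1))) / 4
# Should print 0: F really is an antiderivative of x*sqrt(x**4 + 1).
print(simplify(diff(F, x) - x * sqrt(x**4 + 1)))
```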

Now let’s try the original sqrt(x^4 + 1), which is u dx.

Expanding at the poles,

u dx = ∓(t^(-4) + 1/2 + …) dt, so there will be no logarithm, and we just need a linear combination of x, u, x^2, ux with the desired principal parts. This is again not hard (ux, suitably scaled, handles the t^(-4) part), but this time
u dx - d(ux/3) is a nonzero holomorphic differential, which cannot be the differential of any function.

So we are starting to see the source of the impossibility of finding an elementary integral: before, we could do it because we were lucky enough that ux dx - d(ux^2/4) was just a logarithmic differential, but no such luck here. Working it out, u dx - d(ux/3) = (2/3) dx/u, i.e. the leftover is the elliptic differential dx/u itself, and that is exactly what has no elementary integral.
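And that last identity is easy to verify symbolically too (again, my own check):

```python
from sympy import symbols, sqrt, diff, simplify, Rational

x = symbols('x', positive=True)
u = sqrt(x**4 + 1)

# u dx - d(ux/3) = (2/3) dx/u: the leftover piece is exactly
# the holomorphic (elliptic) differential.
print(simplify(u - diff(u * x / 3, x) - Rational(2, 3) / u))  # 0
```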