I don’t tend to think of differentiation or integration as a process. For my purposes, it’s generally enough to know that the derivative or integral of a function exists. And in that sense, all the information about the integral and all the information about the derivative are contained in the original function.
If you know what a limit is, the definition of a derivative can be given in half a line. The definition of an integral takes a paragraph or so. That’s all I mean by “more complicated”.
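To make “more complicated” concrete, here are the two usual definitions side by side (standard textbook versions, nothing exotic):

```latex
% Derivative: one line.
f'(x) = \lim_{h \to 0} \frac{f(x+h) - f(x)}{h}.

% Riemann integral: take partitions a = x_0 < x_1 < \dots < x_n = b
% with sample points x_i^* \in [x_{i-1}, x_i]; then
\int_a^b f(x)\,dx
  = \lim_{\max_i (x_i - x_{i-1}) \to 0} \sum_{i=1}^{n} f(x_i^*)\,(x_i - x_{i-1}),
% provided the limit exists and is the same for every choice of
% partitions and sample points.
```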
Yes, and if you change a function very slightly on a very small interval around a point, the derivative may change quite a bit.
Besides, as Achernar pointed out, “global” and “local” don’t mean anything precise. Integrals can be carried out over very small intervals.
I assume one of those “antiderivative”s was supposed to be “derivative”. With an antiderivative, you’ve still got that constant of integration, which is not determined by the function.
It’s true that you can get an antiderivative locally, and set the constant of integration to whatever you like, but there’s no guarantee that the constant of integration you use at one place will match the one for an antiderivative you get locally elsewhere.
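The classic textbook illustration: on a disconnected domain, the constants on the separate pieces are genuinely independent.

```latex
% Every antiderivative of f(x) = 1/x on R \ {0} has the form
F(x) =
\begin{cases}
  \ln x + C_1,    & x > 0, \\
  \ln(-x) + C_2,  & x < 0,
\end{cases}
% and nothing forces C_1 = C_2: each constant can be chosen freely
% on its own piece of the domain.
```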
Care to give us an example? Other than divergence to infinity (int(1/x, x=0…1), for example), I can’t think of any non-integrable functions. And I’m usually pretty good at finding pathological counterexamples, too.
Let f(x) be a function defined on the interval [0,1], such that f(x)=0 if x is irrational and f(x)=1 if x is rational. Such a function is not Riemann integrable: the “upper Riemann sums” are always exactly 1, while the “lower Riemann sums” are always exactly 0, so the Riemann sums never converge to a common value.
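Spelled out, since every subinterval of any partition contains both rationals and irrationals:

```latex
% For any partition P: 0 = x_0 < x_1 < \dots < x_n = 1,
U(f, P) = \sum_{i=1}^{n} \Big( \sup_{[x_{i-1}, x_i]} f \Big)(x_i - x_{i-1})
        = \sum_{i=1}^{n} 1 \cdot (x_i - x_{i-1}) = 1,
\qquad
L(f, P) = \sum_{i=1}^{n} \Big( \inf_{[x_{i-1}, x_i]} f \Big)(x_i - x_{i-1}) = 0.
```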
The function f(x) is, in fact, Lebesgue integrable; its Lebesgue integral is 0. There are functions which aren’t even Lebesgue integrable, but they’re harder to construct. The short version is, let g(x) be a function such that g(x)=1 if x lies in a given set A, and g(x)=0 otherwise, where A is a non-measurable subset of the real line. So you need a “non-measurable subset”. I think the following is non-measurable: define the equivalence relation x~y if x-y is rational, and let A be a subset of [0,1] containing exactly one element from each equivalence class of ~. I’d have to spend some time going over my old analysis textbooks to verify that, however. (I’d also have to go back to my textbooks to even define “measurable” properly, to be honest.)
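For what it’s worth, the standard verification (it does use the axiom of choice to pick A) runs like this:

```latex
% Let q_1, q_2, \dots enumerate the rationals in [-1, 1], and let A
% hold one representative from each equivalence class of ~ in [0,1].
% The translates A + q_n are pairwise disjoint: a + q_m = a' + q_n
% forces a - a' rational, hence a = a' and q_m = q_n.  Also
[0,1] \;\subseteq\; \bigcup_{n=1}^{\infty} (A + q_n) \;\subseteq\; [-1, 2].
% If A were measurable with measure m, translation invariance and
% countable additivity of Lebesgue measure would give
1 \;\le\; \sum_{n=1}^{\infty} m \;\le\; 3,
% which is impossible: the sum is 0 if m = 0 and infinite if m > 0.
```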
I guess I was considering functions which are “physicist integrable”, because I was considering the function in your first example to integrate to zero (a physicist function or operation is one which has whichever unspecified properties it needs in order to make the problem interesting, well-defined, and/or doable :) ). Similarly, you don’t need to dig up the official mathematician’s definition of “measure”, since I, as a physicist, know exactly what that means, without needing a definition.
But y’know, I think that your second example works, so long as you restrict yourself to positive numbers, and you allow the axiom of choice (I think that the equivalence classes might not be well-defined if you allow negative numbers, and you need to choose one element from each class). Rather an interesting set, I must say.
As an example, which I actually ran into once, I had some nasty integral involving Bessel functions. I could get expansions for the integrand for the limits X --> 0 and also for X --> infinity, and could thus find expressions for the integral in both limits. I wanted to say that the integral was zero (IIRC), but I had no way of showing that the constant of integration was the same in the two cases.
Looking at a simple case, let f(X) = -J[sub]1[/sub](X), use the approximation f(X) = -X/2 for X small, and use f(X) = -J[sub]1[/sub](X) itself for X large. Now try to get the integral from 0 to infinity of f(X). Both integrations are easily performed “locally”: for small X, an antiderivative is F(X) = -X^2/4, which vanishes at X = 0; for large X, an antiderivative is F(X) = J[sub]0[/sub](X) (since J[sub]0[/sub]’ = -J[sub]1[/sub]), which vanishes as X --> infinity.
So (naively) integral{[from 0 to infinity] f(X)} = F(infinity) - F(0) = 0 - 0 = 0. But that’s wrong; it’s known that integral{[from 0 to infinity] -J[sub]1[/sub](X)} = -1. The trouble is the constant of integration: the small-X antiderivative that actually matches J[sub]0[/sub](X) is 1 - X^2/4, not -X^2/4.
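For anyone who wants to check the mismatch numerically, here’s a quick sketch in Python using scipy (J0 and J1 from scipy.special; the identity J0’ = -J1 is standard):

```python
from scipy import special, integrate

# The exact antiderivative of -J1(x) is J0(x), since d/dx J0(x) = -J1(x).
# The naive small-x antiderivative -x**2/4 vanishes at 0, but J0(0) = 1:
print(special.j0(0.0))           # prints 1.0: the mismatched constant of integration

# The integral of -J1 from 0 to X equals J0(X) - J0(0) = J0(X) - 1:
X = 200.0
val, _ = integrate.quad(lambda x: -special.j1(x), 0, X, limit=500)
print(val, special.j0(X) - 1.0)  # both close to -1, up to J0's small tail
```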
Actually, multiplication, division, and constant-degree roots all take the same amount of time computationally. In fact, they are equivalent problems. That is, a better bound (upper or lower) on one applies to the rest automatically. This is discussed (or given as homework questions) in Aho, Hopcroft, and Ullman’s algorithms book, and no doubt in others. The standard techniques for such problems given in grade school are not very efficient. Current methods for very large numbers use FFT-based algorithms. For some strange reason, FFT is not taught in the 3rd grade.
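For the curious, here’s a toy sketch of the FFT approach in Python (floating-point FFT via numpy; real bignum packages use number-theoretic transforms and careful error bounds, so treat this purely as illustration):

```python
import numpy as np

def fft_multiply(a, b):
    """Multiply two nonnegative integers by FFT convolution of their digits (toy version)."""
    da = [int(d) for d in str(a)][::-1]   # base-10 digits, least significant first
    db = [int(d) for d in str(b)][::-1]
    n = 1
    while n < len(da) + len(db):          # pad to a power of two >= result length
        n *= 2
    # Pointwise product in the frequency domain = convolution of the digit sequences.
    raw = np.fft.ifft(np.fft.fft(da, n) * np.fft.fft(db, n)).real
    digits, carry = [], 0
    for c in raw:                          # propagate carries back into base 10
        carry += int(round(c))
        digits.append(carry % 10)
        carry //= 10
    while carry:
        digits.append(carry % 10)
        carry //= 10
    return int(''.join(map(str, digits[::-1])))

print(fft_multiply(31415926, 27182818) == 31415926 * 27182818)  # True
```

The grade-school method is O(n^2) in the number of digits; the convolution above is O(n log n), which is the whole point of the trick.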
Some problems in modular arithmetic haven’t been mentioned. Finding squares or doing exponentiation in finite fields is easy. Taking roots or logs appears to be hard. Proving that they are in fact hard would make one world famous overnight.
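In Python terms (illustrative numbers; the built-in pow does fast modular exponentiation by repeated squaring):

```python
p = 9973    # a prime modulus (illustrative)
g = 5       # a base
x = 1234    # the "secret" exponent

y = pow(g, x, p)   # fast: O(log x) modular multiplications

# Going the other way (the discrete logarithm) has no known
# polynomial-time algorithm in general; here is the naive brute force:
def dlog_bruteforce(g, y, p):
    acc = 1
    for k in range(p):
        if acc == y:
            return k
        acc = acc * g % p
    return None

print(dlog_bruteforce(g, y, p))   # recovers x, modulo the order of g
```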
The analogy I first thought of was multiplying and finding roots of polynomials. Using the quadratic formula to find the roots of a quadratic equation is more complicated than multiplying linear terms, and it rapidly goes downhill from there.
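For instance, multiplying out is purely mechanical, while undoing it already requires the quadratic formula:

```latex
(x - r_1)(x - r_2) = x^2 - (r_1 + r_2)\,x + r_1 r_2,
\qquad\text{whereas}\qquad
r_{1,2} = \frac{-b \pm \sqrt{b^2 - 4ac}}{2a}
\quad\text{for } a x^2 + b x + c = 0.
```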
Just as well, they’d just teach them the DFT anyway.
All that that shows is that you have to be careful when you use approximations.
Define f ~ g iff f(x) - g(x) = c, a constant, for all x. ~ is an equivalence relation, so that’s why I feel comfortable saying that a function uniquely determines its antiderivative.
For any x, I can set the value of the antiderivative to any value I want, confident that one function satisfying your equivalence relation will match that value. That’s an interesting definition of “uniquely” you must be using.
Ok, fair enough; I have not provided a precise definition of what I mean by “local” and “global”, but I took the following definitions to be understood.
Given a map [symbol]F[/symbol] that maps a set of functions defined on some domain X into another set of functions defined on X, I say that [symbol]F[/symbol] is local iff, for each point x in X and every neighborhood N of x, if functions f and g in the domain of [symbol]F[/symbol] agree on N, then [symbol]F[/symbol](f) and [symbol]F[/symbol](g) agree at x. I say that [symbol]F[/symbol] is global iff [symbol]F[/symbol] is not local.
By this definition, if [symbol]F[/symbol] returns the derivative of continuous functions, then it is local; whereas if [symbol]F[/symbol] returns an antiderivative of continuous functions, then it is global.
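In these terms the asymmetry is easy to see:

```latex
% Derivative: if f = g on a neighborhood N of x, then for h small enough
% that x + h lies in N,
f'(x) = \lim_{h \to 0} \frac{f(x+h) - f(x)}{h}
      = \lim_{h \to 0} \frac{g(x+h) - g(x)}{h} = g'(x),
% since the difference quotient only ever looks at values inside N.
% Antiderivative: F(x) = \int_a^x f(t)\,dt depends on f over all of [a, x],
% so changing f far from x changes F(x), even if f is untouched near x.
```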
That may work. But I’m slightly confused on (at least) one thing. Isn’t it true that the antiderivative of a function is not itself a function, because of the constant of integration? Isn’t it instead a set of functions? If so, then wouldn’t antiderivative be an invalid value for F, given its definition?
You’re an engineer or a scientist, right? When I think of a function, I’m not thinking of it as a collection of values (honestly, I’d never thought of evaluation until you brought it up), but rather as a point in some space. Turning a space into a “smaller” space whose points are equivalence classes of points in the original space is not unusual.
I should say that, given a function f, the set of antiderivatives of f and the set of derivatives of f are both uniquely determined. The fact that one’s a singleton and the other uncountably infinite is of little import.
If you make the range of [symbol]F[/symbol] the powerset of the set of functions X -> A (where A is whatever the range of the original class of functions is), you can get around that.