Question about Fourier series

I was just watching a Khan Academy vid where, after having proved that the definite integral from 0 to 2 pi of sin(mt) is zero when m is a non-zero integer, Sal said that this was actually true for any m. I didn’t understand this, because the integral works out to -1/m(cos(m(2*pi))) + 1/m(cos(0)). If m is 0, isn’t that undefined? I see that the 1/m terms cancel out, but what’s undefined minus undefined?

Thanks,
Rob

This is a bit like asking whether (x-1)[sup]2[/sup]/(x-1) is defined when x = 1. If you think of it as 0/0, then no. But if you let me first cancel the common factor of x-1, then yes.

Not to overly simplify things, but if m = 0, then sin(mt) = sin(0) = 0 for all finite t. Ergo, its definite integral vanishes.

Just missed the edit window.

I presume you got your formula for int(sin(mt)) = 1/m cos(m t) by making the substitution x = m t and going from there. If you think about what that substitution is actually saying for m = 0, you will hopefully conclude that your formula for int(sin(mt)) is only valid for nonzero m in the first place.

I get that the other term cancels out, but it seems like we’re left with undefined * 0. Why can we say that’s 0 and not undefined?

I accept that this is different and that Laplace transforms work, but I don’t understand the rule here.

Thanks,
Rob

The problem is that you’re using a formula for the integral of sin(m t) which is only valid for non-zero m and applying it at m = 0.
Let’s say I want to do the integral. The first thing I’ll do is make a change of variables, say
x = m t => dx = m dt => dt = dx/m.
Then I’m integrating 1/m sin(x) dx, which I know how to do. But I can’t do this for m = 0, because I cannot write
dx = m dt => dt = 1/m dx
when m = 0.
You can, as it transpires, take the m => 0 limit of your expression
Integral( sin(m t) dt, t = 0…2π) = 1/m [cos(0) - cos(2 π m)]
and you’ll get the right answer. For this, we note
cos(2 π m) ~ 1 - 1/2 (2 π m)[sup]2[/sup]
so
Limit[Integral( sin(m t) dt, t = 0…2π), m => 0] = Limit[1/m (1 - [1 - 1/2 (2 π m)[sup]2[/sup]]), m => 0] = Limit[2 π[sup]2[/sup] m[sup]2[/sup]/m, m => 0]
which you’ll find is again zero. I’m not sure you’ll find that convincing, but it works for me.
Or, as previously noted, you can just use the fact that Integral( sin(0 t)) = Integral(0) = 0.
I always forget how annoying typesetting math is here.
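For what it’s worth, here’s a quick numerical sanity check of all this (my own Python sketch, not anything from the video; the function names are made up). It compares a midpoint-rule estimate of Integral( sin(m t) dt, t = 0…2π) against the closed form 1/m [cos(0) - cos(2 π m)], which is only usable for nonzero m:

```python
import math

def integral_sin_mt(m, n=100_000):
    """Midpoint-rule estimate of the integral of sin(m*t) over t in [0, 2*pi]."""
    h = 2 * math.pi / n
    return sum(math.sin(m * (i + 0.5) * h) for i in range(n)) * h

def closed_form(m):
    """The antiderivative-based formula (1 - cos(2*pi*m)) / m; valid only for m != 0."""
    return (1 - math.cos(2 * math.pi * m)) / m

# For nonzero m the two agree, and both shrink like 2*pi^2*m as m -> 0:
for m in (3, 1, 0.1, 0.01, 0.001):
    print(m, integral_sin_mt(m), closed_form(m))

# At m = 0 the formula is 0/0, but the integrand is identically zero:
print(0, integral_sin_mt(0))
```

Both columns head to 0 together, consistent with the limit worked out above.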

It’s defined when it’s just a point discontinuity in a continuous curve.

If that’s infinity, then that’s the value… not undefined.
Infinity1 minus Infinity2 is undefined unless you know how the infinities are generated. Here we do know the generator of these infinities: when the same infinity, generated by the same shape of curve, is subtracted from itself, the result is zero.

If m = 0, then the integral is obviously zero; the integrand vanishes identically. If you want to justify taking m->0 in the explicit formula you have, use the dominated convergence theorem.

Right, but the value needn’t be a continuous function of the parameter. As it happens, it is, but there is no getting around the fact that the integral of 0 is 0, the only thing that matters.

Well, as Itself noted, the integral’s value IS guaranteed to be a continuous function of the parameter as long as the conditions for dominated convergence are satisfied, which in this case they clearly are (as they must be in any situation where the integral is over a compact range, the parameter varies over a compact range, and the integrand is continuous as a function of the parameter and of the variable being integrated with respect to).

It is here. The dominated convergence theorem applies; the integrand is bounded by a constant (independent of m).

I suppose it doesn’t even really matter that the parameter’s full scope is taken to be a compact range, since we can always just pick a compact range for the parameter around the point of interest. Then we find that the integrand assumes some finite maximum magnitude (as a continuous function over a compact space), and therefore the constant function at this maximum magnitude dominates the integrand while being integrable (over a finite range), giving dominated convergence.

So something like Fubini’s theorem: For f : X x Y -> R continuous with X locally compact, and Y compact and finite (in measure), the function g(x) = \int_Y f(x, y) dy is continuous.

Yup, exactly!

Incidentally, g8rguy gave a good account of the limit of -1/m(cos(m(2*pi))) + 1/m(cos(0)) as m goes to 0, but another way to think about it is that this is, up to sign, the limit which defines the derivative of cos(2πm) at m = 0. That derivative is -2π * sin(2π * 0) = 0.

(This was implicit in g8rguy’s evaluation by taking the Taylor series of cos(2πm) out to the 2nd degree; in that Taylor series, we make use of the first derivative of cos(2πm) at 0, and this ends up being the only term that remains when we calculate the desired limit. I’m just noting that a little more explicitly)

Yes, but you don’t need all the fancy machinery to know that the integral of 0 is 0.

Do you people know the beautiful proof of the fact that if you divide a rectangle into subrectangles with sides parallel to the sides of the original rectangle, and if each of the subrectangles has the property that at least one of its edges has integer length, then the same is true of the original rectangle?

Assume that the original rectangle has vertices (0,0), (a,0), (0,b), (a,b) and calculate \int_0^a \int_0^b \sin(2\pi x)\sin(2\pi y)\, dy\, dx.
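In case anyone wants to see why that integral settles it without grinding through it by hand, here is a small Python sketch (mine, with made-up names). The inner integrals have closed forms, so the double integral over any subrectangle [x1, x2] × [y1, y2] factors into two one-dimensional strips, and a strip vanishes exactly when its side length is an integer:

```python
import math

def strip(u1, u2):
    """Closed form of the integral of sin(2*pi*u) du over [u1, u2];
    zero whenever u2 - u1 is an integer."""
    return (math.cos(2 * math.pi * u1) - math.cos(2 * math.pi * u2)) / (2 * math.pi)

def box(x1, x2, y1, y2):
    """Double integral of sin(2*pi*x) * sin(2*pi*y) over [x1,x2] x [y1,y2]."""
    return strip(x1, x2) * strip(y1, y2)

print(box(0.3, 2.3, 0.7, 1.2))  # x-side has length 2, so this vanishes
print(box(0.0, 1.5, 0.0, 2.5))  # neither side integer: nonzero (1/pi^2)
```

So each subrectangle with an integer side contributes 0, the contributions add up over the tiling, and the big rectangle’s integral forces (1 - cos 2πa)(1 - cos 2πb) = 0, i.e. a or b is an integer.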

Oh, that is beautiful. The use of sine is inessential, though; there is a more abstract, non-calculus version of the same argument:

Specifically, write frac(t) = t - floor(t) for the fractional part of t, and for any point (x, y), let f(x, y) = frac(x) * frac(y). And for each rectangle R, let g(R) = (frac(x2) - frac(x1)) * (frac(y2) - frac(y1)), where the rectangle’s max and min x and y coordinates are x2, x1, y2, y1; thus, g(R) = 0 if and only if some side of R has integer length (since frac(x1) = frac(x2) exactly when x2 - x1 is an integer). But note also that g(R) = f(R’s top-left point) - f(R’s top-right point) - f(R’s bottom-left point) + f(R’s bottom-right point).

Now consider the sum of g over all the little rectangles that make up our big rectangle. This amounts to a big sum of f with varying signs and repetitions over all the corners of rectangles in our diagram, and there are three possible kinds of corners: internal corners surrounded by four rectangles (counted as top-left, top-right, bottom-left, and bottom-right once each, and thus cancelling out in our total sum); corners in the middle of the outermost edges (counted twice with opposite signs, again cancelling out); and finally the very outermost corners of our big rectangle (counted just once, with the same sign as in g of our big rectangle).

So g(big rectangle) = the sum of g over all the little rectangles comprising it. So if all the little rectangles comprising it have g equal to 0 (i.e., a side with integer length), so does the big rectangle.
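To watch this bookkeeping in action, here is a small Python sketch (my own, with made-up names). It uses the fractional part frac(t) = t - floor(t), a function of period 1, as the corner function, so that g vanishes exactly when a side length is an integer:

```python
import math

def frac(t):
    """Fractional part of t; a function with period 1."""
    return t - math.floor(t)

def g(x1, y1, x2, y2):
    """Corner quantity for the rectangle [x1, x2] x [y1, y2].
    Zero exactly when some side has integer length."""
    return (frac(x2) - frac(x1)) * (frac(y2) - frac(y1))

# Additivity over a tiling (the telescoping-corners argument): a 1.3 x 0.6
# rectangle cut vertically at x = 0.5; the two tile values sum to g(big).
print(g(0, 0, 1.3, 0.6), g(0, 0, 0.5, 0.6) + g(0.5, 0, 1.3, 0.6))

# A 2 x 1.5 rectangle tiled by pieces that each have an integer side:
# every tile has g = 0, so g of the big rectangle is 0 as well.
tiles = [(0, 0, 2, 0.5), (0, 0.5, 1, 1.5), (1, 0.5, 2, 1.5)]
print([g(*t) for t in tiles], g(0, 0, 2, 1.5))
```

The first print shows the additivity; the second shows the theorem’s conclusion on a concrete tiling.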

More abstractly, the particular function used here can be replaced by any periodic function of period 1, of which the sine integral approach is one instance.

Also, they don’t have to be the same periodic functions acting on x coordinates and on y coordinates. [I would’ve phrased the whole thing in terms of tensor products of free abelian groups on coordinates modulo integers, to be completely abstract from the start, since that’s how I think about it in my head, but I gather many prefer “concreteness”.]

Ah, I phrased things here with the unusual convention that y increases from top to bottom. Well, you all can swap things to whatever convention you like; flip signs in the last line accordingly. You’ll figure it out; you’ll be fine.

Oh, these “T points” (corners sitting in the middle of another rectangle’s edge) can also be located internally, not just on the outermost edges. But, at any rate, they cancel out the same way.

Also, this more abstract formulation shows we get a much more general result: consider ANY equivalence relation whatsoever on x coordinates, and ANY separate equivalence relation whatsoever on y coordinates. If each small rectangle has some pair of parallel sides whose coordinates are equivalent, then so does the big one.
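Here is a concrete sketch of that fully abstract version (my own encoding, with made-up names): each equivalence relation is represented by a key function assigning a canonical label to each class, and the corner sum lives in the free abelian group on pairs of labels, represented as a Counter with signed coefficients.

```python
from collections import Counter

def corner_sum(rect, cx, cy):
    """Formal corner sum of the rectangle (x1, y1, x2, y2), as an element of
    the free abelian group on pairs (x-class, y-class). It is zero exactly
    when the two x-coordinates, or the two y-coordinates, are equivalent."""
    x1, y1, x2, y2 = rect
    total = Counter()
    for x, y, sign in ((x1, y1, +1), (x2, y2, +1), (x1, y2, -1), (x2, y1, -1)):
        total[(cx(x), cy(y))] += sign
    return Counter({k: v for k, v in total.items() if v})  # drop zero terms

# Example equivalence on both axes: u ~ v iff v - u is an integer
# (keyed by fractional part, rounded to dodge floating-point noise).
frac = lambda t: round(t % 1, 9)

big = (0, 0, 2, 1.5)
tiles = [(0, 0, 2, 0.5), (0, 0.5, 1, 1.5), (1, 0.5, 2, 1.5)]

print([dict(corner_sum(t, frac, frac)) for t in tiles])  # every tile: {}
print(dict(corner_sum(big, frac, frac)))                 # big rectangle: {}
```

Since the tile sums telescope to the big rectangle’s sum exactly as before, every tile having a pair of equivalent parallel sides forces the same for the big rectangle, for any choice of equivalence relations.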