OK, this should be child’s play for anyone who has taken single-variable & multivariable calculus and actually paid attention. I need a comprehensive and simplified version of the proof of Euler’s formula (of the e^(ix) = cos x + i sin x variety). I tried an internet search and found multiple proofs, none of which I could make sense of, probably due to a not-very-good understanding of Taylor series. Someone help please!!! This is driving me crazy, not to mention I have a pretty hefty test coming up in 5 days. If anybody can help me, please try & show as many intermediate steps as you can with respect to factorials (my other weakness) and Taylor expansions. Thanks!!
Oh yeah. In order to do this, you really really need that understanding of Taylor series, or at least Maclaurin series. I suggest that you read this post carefully, and if you get lost anywhere, say exactly where.
Here’s the idea. If you have two continuous and infinitely differentiable functions, and their values at a point (say 0) are the same, and the values of all their derivatives at that point are also the same, then (at least for the nice, well-behaved functions you meet in calculus) they’re the same function. Yeah, it sounds tricky, but it’s not so bad, if you know how to differentiate. Let’s consider the easiest function in the book: f(x) = e[sup]x[/sup]. Let’s look at all its derivatives’ values at 0 (if you don’t know how to do this part, say so, and someone’ll explain that too):
f(0) = 1
f’(0) = 1
f’’(0) = 1
f’’’(0) = 1
etc.
See a pattern? So the idea is that if we have another function such that g(0) = 1, g’(0) = 1, g’’(0) = 1, g’’’(0) = 1, etc., then g(x) = f(x). Now, it turns out that it’s easy to construct such a g(x) in the realm of polynomials. Consider the following polynomial:
g(x) = A[sub]0[/sub] + A[sub]1[/sub]x + A[sub]2[/sub]x[sup]2[/sup] + A[sub]3[/sub]x[sup]3[/sup] + A[sub]4[/sub]x[sup]4[/sup] + ···
and start taking derivatives. I’ll trust that you can do this part on your own, but if not, again say something. Here’s what you get:
g(0) = A[sub]0[/sub]
g’(0) = A[sub]1[/sub]
g’’(0) = 2A[sub]2[/sub]
g’’’(0) = 2·3A[sub]3[/sub]
g’’’’(0) = 2·3·4A[sub]4[/sub]
See a pattern? If not, here’s the general form:
g[sup](n)[/sup](0) = n!·A[sub]n[/sub]
So, what if we define all the A’s like this:
A[sub]n[/sub] = 1 / n!
Then we have:
g[sup](n)[/sup](0) = n!·A[sub]n[/sub] = n! / n! = 1
and so:
g[sup](n)[/sup](0) = f[sup](n)[/sup](0) for every n
Now this is exactly what we were looking for. If we write out our g(x), plugging in the assignments for the A’s, we get this:
g(x) = 1 + x + x[sup]2[/sup] / 2! + x[sup]3[/sup] / 3! + x[sup]4[/sup] / 4! + ···
Or, since this is exactly equal to f(x), we have this:
e[sup]x[/sup] = 1 + x + x[sup]2[/sup] / 2! + x[sup]3[/sup] / 3! + x[sup]4[/sup] / 4! + ···
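If you want to convince yourself numerically, here’s a quick sanity check in Python (the function names and test points are my own choices, not anything standard). It checks two things: that defining A[sub]n[/sub] = 1/n! really does make every derivative of g at 0 come out to 1, and that the partial sums of the series really do approach e[sup]x[/sup].

```python
import math

def nth_derivative_at_zero(coeffs, n):
    """For a polynomial sum(coeffs[k] * x**k), return its nth derivative at 0."""
    c = list(coeffs)
    for _ in range(n):
        # d/dx of c[k]*x^k is k*c[k]*x^(k-1), so shift the list down
        c = [k * c[k] for k in range(1, len(c))]
    return c[0] if c else 0.0

# With A_n = 1/n!, every derivative of g at 0 comes out to 1, just like e^x
A = [1.0 / math.factorial(n) for n in range(12)]
for n in range(8):
    assert abs(nth_derivative_at_zero(A, n) - 1.0) < 1e-9

def exp_series(x, terms=30):
    """Partial sum of 1 + x + x^2/2! + x^3/3! + ..."""
    total, term = 0.0, 1.0
    for n in range(terms):
        total += term
        term *= x / (n + 1)  # turn x^n/n! into x^(n+1)/(n+1)!
    return total

# The partial sums agree with math.exp to high precision
for x in (-2.0, 0.0, 0.5, 3.0):
    assert abs(exp_series(x) - math.exp(x)) < 1e-9
```

Note the trick of building each term from the previous one by multiplying by x/(n+1), which avoids computing big factorials directly.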
Isn’t that wicked cool? We have a really off-the-wall function like e[sup]x[/sup] written in terms of a polynomial, albeit an infinite one! In general, you can write any simple function which is infinitely differentiable at 0 like this:
h(x) = h(0) + h’(0)x + h’’(0)x[sup]2[/sup] / 2! + h’’’(0)x[sup]3[/sup] / 3! + h’’’’(0)x[sup]4[/sup] / 4! + ··· + h[sup](n)[/sup](0)x[sup]n[/sup] / n! + ···
The above formula is called the Maclaurin series expansion of a function. Memorize it if you haven’t done so already. Once you’ve followed everything so far, try something yourself. At the very least, try to write sin(x) and cos(x) in terms of a polynomial. After that, I’d say you’re about 75% of the way to understanding the derivation of Euler’s formula.
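If you want to check your sin(x) and cos(x) answers numerically, here’s a sketch in Python (function names and test points are mine; the alternating signs are the thing to watch):

```python
import math

def sin_series(x, terms=25):
    """Partial sum of x - x^3/3! + x^5/5! - ..."""
    total, term = 0.0, x
    for n in range(terms):
        total += term
        # go from x^(2n+1)/(2n+1)! to -x^(2n+3)/(2n+3)!
        term *= -x * x / ((2 * n + 2) * (2 * n + 3))
    return total

def cos_series(x, terms=25):
    """Partial sum of 1 - x^2/2! + x^4/4! - ..."""
    total, term = 0.0, 1.0
    for n in range(terms):
        total += term
        # go from x^(2n)/(2n)! to -x^(2n+2)/(2n+2)!
        term *= -x * x / ((2 * n + 1) * (2 * n + 2))
    return total

for x in (-1.0, 0.0, 0.5, 2.0):
    assert abs(sin_series(x) - math.sin(x)) < 1e-9
    assert abs(cos_series(x) - math.cos(x)) < 1e-9
```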
Thank you so much for your help–that pretty much cleared up the Taylor expansion shakiness. Now when I look at the expanded form of the Euler proof I find I understand it, but I still don’t get the “condensed” summation version. I see that the first part is cos x and the second part equals i sin x, but that’s just because I have memorized what cos x and sin x are; I don’t understand how I am supposed to look at the expanded version of anything and turn it into all that whatever over n! stuff. Another thing I don’t get is, in the first part (the part that equals cos x), why is the bottom (2n)! instead of n! squared?? (sorry; I don’t know how to make exponents on here & all that fancy stuff!) It seems to me that on top all they did was square the terms, so why didn’t they square the bottom??
I think your confusion comes from not knowing enough about power series. A power series is the sum of a[sub]n[/sub]·x[sup]n[/sup], where n ranges from zero to infinity. A Taylor series is just a specific kind of power series, where a[sub]n[/sub] = f[sup](n)[/sup](0) / n!. Why do they do it that way? Well, it just works right, basically. There are a lot of nice theorems that depend on Taylor series to make things work out.
nevermore: “I see that the first part is cos x and the second part equals i sin x, but that’s just because I have memorized what cos x and sin x are; I don’t understand how I am supposed to look at the expanded version of anything and turn it into all that whatever over n! stuff.”
The truth is that in general, there is no way to look at a Taylor expansion of a function and determine what that function is. Basically, you just have to memorize the simplest ones and see if you can work from there. At the very least, have memorized the Maclaurin series for e[sup]x[/sup], sin(x), and cos(x).
Now, as for why there’s a (2n)! involved in your proof, well, I’d have to see exactly what proof it is to let you know why they do that. But I have an idea. Here’s what I think it is. As you know, the Maclaurin series expansion for cos(x) is as follows:
cos(x) = 1 - x[sup]2[/sup] / 2! + x[sup]4[/sup] / 4! - x[sup]6[/sup] / 6! + x[sup]8[/sup] / 8! - ···
If you want to write this in Sigma notation, there are a couple of ways to do this. The best way to do it, though, is very ugly. Here it is:
Σ (-1)[sup]n[/sup] x[sup]2n[/sup] / (2n)!
Where n goes from 0 to ∞. In order to understand why it has to be like this, try writing out the first few terms. In general, if you want to understand a sum like this, that’s the way to do it - write out a few terms, until you see the pattern.
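Following that advice, a few lines of Python will write out the first terms of that Sigma and check them against cos(x). (The test point x is arbitrary, my own choice.)

```python
from math import factorial, cos

x = 1.3  # arbitrary test point
# The first several terms of  Σ (-1)^n x^(2n) / (2n)!
terms = [(-1) ** n * x ** (2 * n) / factorial(2 * n) for n in range(10)]

# n = 0, 1, 2 give 1, -x^2/2!, +x^4/4! -- exactly the expansion of cos(x)
assert terms[0] == 1.0
assert terms[1] == -x ** 2 / factorial(2)
assert terms[2] == x ** 4 / factorial(4)
assert abs(sum(terms) - cos(x)) < 1e-9
```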
In tex code (sorry, but it ought to be reasonably transparent), the question is how to prove that e^{ix} = cos x + i*sin x. Now in calculus, you learn that
e^x = 1 + x + x^2/2! + x^3/3! + x^4/4! + x^5/5! + …
sin x = x - x^3/3! + x^5/5! - x^7/7! + …
cos x = 1 - x^2/2! + x^4/4! - x^6/6! + …
Now if you replace x by ix in the first formula and realize that i^2 = -1, i^3 = -i, i^4 = 1 and so on, collect the even powers in one place and the odd powers in another, and then factor the i out of the odd terms (the even terms have no i left in them), you should get cos x + i*sin x. If you do it for yourself, you ought to see it, while if I do it, probably only I will see it.
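Here’s that substitution carried out numerically in Python, as a sanity check (the test point and number of terms are my choices): sum up (ix)^n/n! and watch the real part come out as cos x and the imaginary part as sin x.

```python
import cmath
import math

x = 0.9  # arbitrary test point

# Running sum of (ix)^n / n!, building each term from the last
total = 0 + 0j
term = 1 + 0j
for n in range(40):
    total += term
    term *= 1j * x / (n + 1)  # (ix)^(n+1)/(n+1)! from (ix)^n/n!

# Even powers of i are real (i^2 = -1, i^4 = 1, ...); odd powers carry the i.
# So the real part is the cos series and the imaginary part is the sin series.
assert abs(total.real - math.cos(x)) < 1e-12
assert abs(total.imag - math.sin(x)) < 1e-12
assert abs(total - cmath.exp(1j * x)) < 1e-12
```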
There’s a really good, fairly “plain English” explanation of how e^ix is derived in Eli Maor’s book “E: The Story of a Number.” In particular, his explanation of e^(i·pi) + 1 = 0 was understandable even to a non-math person like me.
Achernar:
Yeah, Sigma notation, that’s what I was talking about–that’s the form of the proof that I find every time I do a search. It uses the fact that e^x = E x^n/n! (from n=0 to infinity) and just plugs in ix for x. (I am going to use E as my Sigma symbol-thing because I am a clod, so bear with me.) It states that e^ix = E [(-1)^n times x^2n] / (2n)! + i E [(-1)^(n-1) times x^(2n-1)] / (2n-1)!, with the first sum from n=0 to infinity and the second sum from n=1 to infinity.
This notation is what is really screwing me up: I can look at the expanded version and sort of see that it works, and maybe I could come up with this notation if I expanded the polynomial out far enough, but I don’t even particularly want to do that: I just want to be able to algebraically get this Sigma notation from e^ix. Here’s what I get & don’t get about it: I see that the first term equals cos x, and that the second term equals i sin x, but again this is just because I have memorized their Sigma notations. I see where the (-1)^n and the x^2n in the numerator of the first term come from, I think; it looks like they just squared (ix)^n. But, if this is so, then they would have had to square the denominator as well, and wouldn’t that give (n!)^2 ? Is (2n)! just another way of saying that?? As for the second term, I don’t understand any of that. Where the hell did that shit come from?? If I was figuring this out algebraically from e^ix, I would think I was totally done after the first term. Is this just one of those things you can’t figure out algebraically; you just have to look at the expansion?
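One way to see where that second sigma comes from is to split the terms of e^(ix) by even and odd powers of n and compare each half against the memorized series. A quick Python sketch (x is an arbitrary point I picked):

```python
from math import factorial, sin, cos

x = 0.7   # arbitrary test point
N = 30    # how many terms of (ix)^n / n! to keep

terms = [(1j * x) ** n / factorial(n) for n in range(N)]

# Even n: (ix)^(2n)/(2n)! = (-1)^n x^(2n)/(2n)!  -- purely real, the cos sigma
even_part = sum(terms[n] for n in range(0, N, 2))
# Odd n: (ix)^(2n-1)/(2n-1)! = i*(-1)^(n-1) x^(2n-1)/(2n-1)!  -- the i*sin sigma
odd_part = sum(terms[n] for n in range(1, N, 2))

assert abs(even_part - cos(x)) < 1e-9
assert abs(odd_part - 1j * sin(x)) < 1e-9
```

Notice nothing ever gets squared here: the exponents 2n and 2n-1 just relabel which original powers of (ix) land in which half of the sum.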
Short answer, yes, but I think that will probably be true any time you have Taylor series involved. Long answer:
The ability to work in Sigma notation is an admirable goal, and while you may be able to achieve it, I don’t really know how. The Sigma notation that I always see in proofs like this strikes me as rather artificial. Let me explain what I mean. In general, the Maclaurin series for a function is this:
f(x) = Σ f[sup](n)[/sup](0)x[sup]n[/sup] / n!, n = 0…∞
I know that I used this notation before, but just to be clear, f[sup](n)[/sup](x) is the nth derivative of f(x), and f[sup](n)[/sup](0) is the value of the nth derivative of f(x) at 0. How difficult it is to write f(x) in Sigma notation all boils down to how difficult it is to write f[sup](n)[/sup](0) in terms of n. For e[sup]x[/sup], it’s very simple, since the sequence { f[sup](n)[/sup](0) } = { 1, 1, 1, 1, 1, 1, ··· }. However, it’s seldom so simple. For the next most difficult function, e[sup]-x[/sup], { f[sup](n)[/sup](0) } = { 1, -1, 1, -1, 1, -1, ··· }. But it just so happens that { (-1)[sup]n[/sup] } = { 1, -1, 1, -1, 1, -1, ··· }. Yeah, it works out, but does e[sup]-x[/sup] have anything to do, algebraically, with (-1)[sup]n[/sup]? Not really. Even so, the simplest way to write the Maclaurin series in Sigma notation is using it:
e[sup]-x[/sup] = Σ (-1)[sup]n[/sup]x[sup]n[/sup] / n!, n = 0…∞
I know of no way to derive this equation in Sigma notation without calling explicitly on the fact that { (-1)[sup]n[/sup] } = { 1, -1, 1, -1, 1, -1, ··· }. Now take a function like cos(x). It’s that much more difficult to come up with something that works, because for cos(x), { f[sup](n)[/sup](0) } = { 1, 0, -1, 0, 1, 0, -1, 0, 1, 0, -1, 0 ··· }. How to represent this sequence? You could do something completely dorky like { cos(nπ / 2) }, but that’s sort of defining something in terms of itself. Another way would involve a couple of terms, like { [(-1)[sup]floor(n / 2)[/sup] + (-1)[sup]floor((n + 1) / 2)[/sup]] / 2 }, where floor() is the greatest-integer function. (There’s got to be a better way to do that, but that’s just off the top of my head.) Now, both of these ways are fairly ugly, as well as, I’ll bet, nigh impossible to derive. The best way to write it, since every other term in the sequence is 0, is simply in terms of 2n: { f[sup](2n)[/sup](0) } = { (-1)[sup]n[/sup] }. Our sequence is now missing half of the previous terms, but since these had values of 0, they wouldn’t factor into the final Sigma (heh) at all. What you get from this is the summation:
Σ (-1)[sup]n[/sup] x[sup]2n[/sup] / (2n)!, n = 0…∞
Now, does it seem like I sort of pulled this series from thin air? It should, because that’s what I did. The truth is that in proofs, as long as your Sigma works, nobody will question where it came from. There is no simple way to go from an expanded series to Sigma notation in general. If there were, there wouldn’t be three wildly different forms of cos(x) that I mentioned.
Additionally, you should realize that nowhere is anything squared in this proof, or at least, not the ways that I’ve seen it done. Also realize that (n!)[sup]2[/sup] ≠ (2n)!, in general. Those values are in there because that’s what the series happens to look like.
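To drive home that last point, here’s a two-line check in Python that (2n)! and (n!)^2 part ways almost immediately:

```python
from math import factorial

# (2n)! grows much faster than (n!)^2; they are equal only when n = 0
for n in range(1, 8):
    assert factorial(2 * n) != factorial(n) ** 2

# e.g. n = 3: (2n)! = 6! = 720, while (n!)^2 = (3!)^2 = 36
assert factorial(6) == 720
assert factorial(3) ** 2 == 36
```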