This is a special case of Euler’s Theorem, right? I remember learning this in Number Theory (like ten years ago), but not how it was derived. A little help?
By which of course I meant -1.:smack:
Man, after reading John Allen Paulos’s “Beyond Numeracy” when I was in junior high and coming across this amazingly sublime equation, I was the bane of math teachers’ existences…until I took AP Math senior year of high school and one glorious, glorious day, the teacher showed us how to derive this.
But I can’t remember exactly how, except that it used polar coordinates and infinite series. Sorry I can’t offer more – just loved the memory you evoked.
It’s just a case of the Euler Formula, which is
e[sup]ix[/sup] = cos x + i sin x.
You can see how plugging pi into that gives you the result.
The Formula can be derived from the series expression of e[sup]x[/sup], and remembering that sine is the odd terms and cosine is the even terms. See the link, or maybe other posts in this thread, for that.
Euler’s Theorem refers to any of a number of theorems of Euler, as mentioned in the link.
Can’t you kids do your own homework?
Since e^iX = cos X + i sin X (Euler’s formula)
e^(i pi) = cos pi + i sin pi
= -1
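If you just want to see the arithmetic come out, Python’s cmath module will do it for you. This is only a numerical sanity check of the formula above, not a derivation:

[code]
# Numerical sanity check of Euler's formula at x = pi (illustration only).
import cmath
import math

z = cmath.exp(1j * math.pi)                        # e^(i*pi)
w = complex(math.cos(math.pi), math.sin(math.pi))  # cos(pi) + i*sin(pi)

print(z)                       # (-1+1.2246467991473532e-16j), i.e. -1 up to rounding
print(w)                       # same value
print(abs(z - (-1)) < 1e-12)   # True
[/code]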
e^x = 1 + x + x^2/2! + x^3/3! + x^4/4! . . . .
so
e^ix = 1 + ix + i^2 * x^2/2! + i^3 * x^3/3! + i^4 * x^4/4! . . .
= 1 + ix - (x^2/2!) - i(x^3/3!) + (x^4/4!) + i(x^5/5!) . . .
= (1 - x^2/2! + x^4/4! . . . ) + i (x - x^3/3! + x^5/5! . . . )
= cos x + i sin x.
Thus,
e^(pi *i) = cos (pi) + i * sin(pi) = -1.
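If pushing the symbols around doesn’t convince you, here’s a rough sketch (my own, purely illustrative) that sums the first several terms of the series for e^(ix) and compares the real and imaginary parts against cos x and sin x:

[code]
# Partial sums of the series for e^(ix), split into real and imaginary parts.
# Illustrates the grouping above; it is not a proof.
import math

def exp_ix_partial(x, terms=20):
    """Sum the first `terms` terms of sum_{n>=0} (ix)^n / n!."""
    total = complex(0, 0)
    for n in range(terms):
        total += (1j * x) ** n / math.factorial(n)
    return total

x = math.pi / 3
s = exp_ix_partial(x)
print(s.real, math.cos(x))   # real part matches cos x
print(s.imag, math.sin(x))   # imaginary part matches sin x
[/code]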
You could also play with the Taylor series expansion, show that
e^x = sum (n=0 to infinity) x^n/n!
(this can be derived by remembering that if f(x) = e^x, then every derivative of f(x) is also e^x, and then using the remainder and squeeze theorems).
Hence, e^x = 1 + x + x^2/2 + x^3/6 + x^4/24 + …
Also recall i^2 = -1, thus i^3 = -i and i^4 = 1; hence i = i^5 = i^9 = …, i^2 = i^6 = -1, …
Ah forget it.
I fixed the title for you.
DrMatrix - General Questions Moderator
In higher mathematics it is a matter of definition not of proof that
e[sup]ix[/sup] = cos x + i sin x (*)
(that is, assuming the elementary theory of infinite series). Nevertheless, it is possible to use the elementary theory of differential equations to make (*) plausible. Since e[sup]ix[/sup] is a complex function, write
e[sup]ix[/sup] = f(x) + i g(x)
Differentiating twice and assuming that e[sup]z[/sup] behaves for complex z as it does for real z (this is why it’s plausible, not a proof),
-e[sup]ix[/sup] = f’’(x) + i g’’(x)
Equating real and imaginary parts,
f’’(x) = -f(x)
g’’(x) = -g(x)
Together with the obvious initial conditions (f(0) = 1, f’(0) = 0, g(0) = 0, g’(0) = 1), this gives
f(x) = cos x
g(x) = sin x
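If you want to see that plausibility argument numerically, you can check by finite differences that the real and imaginary parts of e^(ix) really do satisfy f’’ = -f and g’’ = -g, along with those initial conditions. A quick sketch; the step size and sample points are arbitrary choices of mine:

[code]
# Finite-difference check that f(x) = Re(e^(ix)) and g(x) = Im(e^(ix))
# satisfy f'' = -f and g'' = -g, plus the initial conditions.
# The step size h and the sample points are arbitrary, for illustration only.
import cmath

def f(x):  # real part of e^(ix)
    return cmath.exp(1j * x).real

def g(x):  # imaginary part of e^(ix)
    return cmath.exp(1j * x).imag

h = 1e-4
for x in (0.0, 0.7, 2.0):
    f2 = (f(x + h) - 2 * f(x) + f(x - h)) / h**2   # approximate f''(x)
    g2 = (g(x + h) - 2 * g(x) + g(x - h)) / h**2   # approximate g''(x)
    print(round(f2 + f(x), 6), round(g2 + g(x), 6))  # both ~0

# Initial conditions: f(0) = 1, g(0) = 0 (and f'(0) = 0, g'(0) = 1).
print(f(0.0), g(0.0))
[/code]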
My Complex Analysis prof in college wrote it out like this:
e^(i pi) = 0 - 1. He said that you now have the five most important numbers in mathematics in one equation: 0, 1, e, i and pi!
Haj
You guys are full of baloney.
Everybody knows that e^(i pi) = -0.99999…
I always liked it better as e[sup][symbol]ip[/symbol][/sup] + 1 = 0.
No. The Taylor expansions of both sides are identically equal. See a few posts back.
Now that you mention it, that must have been how he did it. Thanks and nice coding by the way.
Haj
TGWATY: That’s sort of what I meant by “a matter of definition”. Sin z, cos z and e[sup]z[/sup] are all defined as power series and it is obvious from these series that
e[sup]iz[/sup] = cos z + i sin z
Jabba, no, sin and cos are defined in terms of ratios of the sides of a right triangle to the hypotenuse.
e is defined as the limit as n goes to infinity of (1 + 1/n)[sup]n[/sup]. Historically, I think it was first discovered to be the value taken by x such that
∫[sub]1[/sub][sup]x[/sup] 1/u du = 1
The identity of their Taylor series is apparently just a happy accident.
Yes, you could equivalently define them in terms of their Taylor series. But then you still get this surprising relation between the value of a certain integral and the ratio of sides of right triangles.
e is a very strange critter. See here for more.
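For what it’s worth, the two definitions of e mentioned above, the limit of (1 + 1/n)^n and the point where the area under 1/u reaches 1, do agree numerically. A crude sketch with arbitrarily chosen step counts, just to illustrate:

[code]
# Crude numerical comparison of two definitions of e (illustration only).
import math

# Definition 1: e = lim (1 + 1/n)^n as n goes to infinity.
n = 10**7
e_limit = (1 + 1 / n) ** n

# Definition 2: e is the x with integral from 1 to x of 1/u du = 1,
# i.e. the area under 1/u from 1 to e equals 1. Check with the trapezoid rule.
steps = 10**5
a, b = 1.0, math.e
h = (b - a) / steps
area = sum((1 / (a + i * h) + 1 / (a + (i + 1) * h)) / 2 * h for i in range(steps))

print(e_limit, math.e)   # both close to 2.718281828...
print(area)              # close to 1
[/code]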
sin, cos, and exp are defined as Taylor series in higher mathematics (cf. Rudin). ln is defined by the integral.
Any relation to right triangles is coincidental.
Since it’s possible to derive the Taylor series of sin and cos from the “right triangles” definition of those functions, I wouldn’t say the relationship is just a coincidence. However, I would agree that in calculus textbooks those functions are defined more often using the Taylor series than using right triangles.
Sine and cosine are defined by their series in “higher” math simply out of convenience. You don’t have to go to the trouble of defining geometrical entities like triangle and angle when all you need is an algebraic expression. But that doesn’t make it prior.
“Any relation to right triangles is coincidental” is the silliest thing I have heard today. So-called higher math texts are silent about the triangle origin because it is assumed you already know that.
Nobody introduces sine/cosine to a student wholly ignorant of them by defining them as a series expansion and then adds, btw, there is also this accidental triangle relation.
Try looking it up in an encyclopedia or dictionary and you’ll see what I mean. I would argue that such most-common definitions are the “real” definitions. E.g., that π is the ratio of the circumference to the diameter of a circle. There may be a dozen other equivalent definitions, but that is the “real” one.
Yeah, OK, that was a bit over the top. Still, as you pointed out, it’s more convenient to do the Taylor series definition, and that’s the one that’s used.