e [sup]π * i[/sup] + 1 = 0
e is the natural logarithm (also irrational)
π is pi. Mmmmm pie.
i = imaginary unit
e is the base of the natural logarithm function.
To understand why this is the case, it helps to realize that you can express both exponential functions and trigonometric functions as sums of infinite series, and when you do that you see that the two are connected: specifically, e[sup]iθ[/sup] = cos(θ) + i sin(θ).
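(If it helps to see that numerically, here is a minimal sketch in Python, not from the original discussion, that sums the first several terms of the exponential series at z = iθ and compares the partial sum with cos(θ) + i sin(θ); the function name and the particular θ are just illustrative choices.)

[code]
# Rough numerical sketch (illustrative only): sum the Taylor series
# 1 + z + z^2/2! + z^3/3! + ... at z = i*theta and compare the partial
# sum with cos(theta) + i*sin(theta).
import math

def exp_series(z, terms=30):
    total, term = 0, 1
    for n in range(terms):
        total += term
        term *= z / (n + 1)   # next series term: z^(n+1)/(n+1)!
    return total

theta = 1.2345
print(exp_series(1j * theta))                  # these two lines print the
print(math.cos(theta) + 1j * math.sin(theta))  # same complex number
[/code]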
You’re right. Thanks.
There is a relatively easy proof that pi is irrational. I once went through a proof that pi (and e) are transcendental, and I have never gone through a more complicated proof in my life. Yet in the end it was based on a simple fact: if you have an integer and it is known to be positive, then it is at least 1.
I gather that Pi is more than just the ratio of a circle’s circumference to its diameter, in that it turns up in quite a few places. What makes this number so special?
I feel like sometimes the discussions of “why is e ubiquitous” boil down to discussions of why exponential functions in general are important, which seems to miss the point a little.
The simplest way to explain it, for me, is that exponential functions in general, for any base, whether it be 2, 10, 3, 3.14159, 1.414, whatever, all have the property that the rate of change of the function is always proportional to the value of the function:
dN/dt = kN.
The number e is unique in that it’s the only base for which the constant of proportionality is 1. Or equivalently, the base of the only exponential function which passes through (0,1) with a slope of 1.
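(As a numerical illustration of that uniqueness claim, here is a rough sketch in Python, added for illustration only: it estimates the slope of b[sup]x[/sup] at x = 0 for a few bases with a central difference. The slope comes out as ln(b), which equals 1 only when the base is e; the step size h is an arbitrary choice.)

[code]
# Central-difference estimate of the slope of b**x at x = 0 for several bases.
# The slope is ln(b); only b = e gives a slope of 1 (illustrative sketch only).
import math

h = 1e-6
for b in (2, 3, 10, math.pi, math.sqrt(2), math.e):
    slope = (b**h - b**(-h)) / (2 * h)
    print(f"b = {b:.5f}: slope at 0 = {slope:.6f}, ln(b) = {math.log(b):.6f}")
[/code]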
Well, to start with, periodic motion is connected to circular motion, and all periodic functions (pendulums, electromagnetic waves, sound waves, water waves, etc.) can be expressed in terms of sines and cosines. (If you project the y- and x-coordinates of a point moving around in a circle, you get sine and cosine waves.) So every phenomenon which exhibits any kind of periodicity, and there are a lot of those, is going to have Pi in its equation.
Beyond that, there are other non-periodic functions where Pi turns up (in a lot of statistical distributions, for example), but I’m not mathematically skilled enough to tell you exactly why.
Now I want a drink, alcoholic of course, after the heavy chapters involving quantum mechanics.
This. And very well-explained.
In other (very loose) terms, [i]e[/i] is to exponential functions like 1 is to addition. Similarly, pi is to periodic functions like 1 is to addition.
Everything just gets simpler to express when you use “natural units”, and e is the “natural unit” of exponential functions and pi is the “natural unit” of periodicity. Again in a loose arm-waving sort of way.
Well, saying all periodic functions is going a bit far. But for most of the ones we’re interested in, it’s true, at least on a basic-model level.
Another way that Pi sneaks in unexpectedly is whenever we’re thinking about things related to areas versus distance. It’s not that surprising if Pi sneaks in when we say “On average, how many trees are going to be within 2 miles of some point in a typical upland forest?”, because we can visualize the physical circle being drawn, and the radius and area thereof, and we know Pi relates those two. But that kind of question (how many X’s within Y distance of some point?) comes up a lot, even when it’s not a physical distance and we wouldn’t expect to find Pi. And any time you’re dealing with sums of squares, it often ends up being a question of distance.
For instance, take the question “What are the chances that if I take two different random numbers between 0 and 1, square them, and add the squares, the result is then greater than 1?” You might not expect Pi in the answer, but what that question is really asking is “If I take a random point in a 1x1 square, what are the chances that it’s within one unit of the lower left corner?”, which is the same as saying “If I inscribe a circle inside a square, what portion of the square is covered by the circle?” And now it’s pretty obvious that Pi is going to be in the answer.
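(For anyone who would rather see that come out of a simulation than out of the geometric picture, here is a minimal Monte Carlo sketch in Python, added for illustration only; the observed frequency should land near 1 - π/4 ≈ 0.2146. The trial count is an arbitrary choice.)

[code]
# Monte Carlo version of the question above: how often does x^2 + y^2 exceed 1
# for x, y uniform on [0, 1]?  The frequency should approach 1 - pi/4.
import math
import random

trials = 1_000_000
hits = sum(1 for _ in range(trials)
           if random.random()**2 + random.random()**2 > 1)
print(hits / trials)        # ~ 0.2146
print(1 - math.pi / 4)      # 0.2146...
[/code]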
Another way to express that property is that the functions exp(ax) are eigenvectors of translation. I.e., if you have a translation operator T[sub]δ[/sub] such that for all functions f, T[sub]δ[/sub]f(x) = f(x+δ), then T[sub]δ[/sub]f is proportional to f if and only if f is of the form x -> exp(ax) (up to a constant factor).
Since a lot of systems exhibit space or time translation symmetry, these eigenvectors show up a lot (and, of course, if a is imaginary, you get trigonometric functions).
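(A tiny numerical illustration of that eigenvector property, added here as a sketch and assuming NumPy is available: shifting the input of exp(ax) by δ just multiplies the whole function by the constant exp(aδ). The particular values of a and δ are arbitrary.)

[code]
# Check that exp(a*x) acts as an "eigenvector" of translation: T_delta applied
# to it equals exp(a*delta) times the original function (illustrative only).
import numpy as np

a, delta = 0.7, 0.3
x = np.linspace(0.0, 5.0, 11)
f = np.exp(a * x)
translated = np.exp(a * (x + delta))          # (T_delta f)(x) = f(x + delta)
print(np.allclose(translated, np.exp(a * delta) * f))   # True
[/code]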
For what it’s worth, one of the ways π pops up in contexts that seem not to have to do with circles is through the fact that (1/2)! = sqrt(π)/2. But there is a connection here: it is in general the case[sup]*[/sup] that the ratio of volumes between the n-dimensional regions described by |x[sub]1[/sub]|[sup]1/p[sub]1[/sub][/sup] + … + |x[sub]n[/sub]|[sup]1/p[sub]n[/sub][/sup] < K and by max(|x[sub]1[/sub]|, …, |x[sub]n[/sub]|) < K is given by the reciprocal multinomial coefficient (p[sub]1[/sub]! * … * p[sub]n[/sub]!)/(p[sub]1[/sub] + … + p[sub]n[/sub])!. So that, in particular, the area of a circle [i.e., |x[sub]1[/sub]|[sup]2[/sup] + |x[sub]2[/sub]|[sup]2[/sup] < K] is (1/2)![sup]2[/sup] times the area of its circumscribing square [i.e., max(|x[sub]1[/sub]|, |x[sub]2[/sub]|) < K].
From this point of view, it’s (1/2)! which is the fundamental quantity, and the circle constants happen to be derived from this. And that (1/2)! happens to show up in many contexts having seemingly little to do with circles (the normal distribution, Stirling’s approximation, etc.) is not surprising, since factorials would show up in such contexts naturally.
[*: I thought I had written up a proof of this fact on the SDMB somewhere before that I could link to, but apparently not. If need be, I’ll write one up later.]
Whoops, take the Ks above to all be 1s.
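(If anyone wants to check the (1/2)! fact directly, here is a two-line sketch in Python, added for illustration, using the standard-library gamma function and the identity n! = Γ(n + 1); the squared value is π/4, the circle-to-circumscribing-square ratio from above.)

[code]
# (1/2)! = gamma(3/2) = sqrt(pi)/2, and its square is pi/4 (illustrative only).
import math

half_fact = math.gamma(1.5)
print(half_fact, math.sqrt(math.pi) / 2)   # both ~ 0.886227
print(half_fact**2, math.pi / 4)           # both ~ 0.785398
[/code]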
Fascinating discussion.
I read every post in this thread.
My brain has officially exploded.
I will take that drink now that [b]robert columbia[/b] has offered to buy a round for the gallery.
ETA: I want to be a Quantum Mechanic one day.
For anyone who’s not yet seen it, Buffon’s needle “trick” for estimating pi might be interesting.
Hm, I just can’t find it. Oh well. Here we go:
First, let’s consider the general problem of determining the volume v(K) of the region where f[sub]1[/sub](x[sub]1[/sub]) + … + f[sub]N[/sub](x[sub]N[/sub]) < K. Let us define new variables y[sub]i[/sub] = f[sub]i[/sub](x[sub]i[/sub]); by standard change of variables, the problem then transforms to integrating, over the region y[sub]1[/sub] + … + y[sub]N[/sub] < K, the product of the g[sub]i[/sub]'(y[sub]i[/sub]), where g[sub]i[/sub] is the inverse of f[sub]i[/sub]; in other words, the derivative of v is the convolution of the derivatives of the g[sub]i[/sub]. (Let us refer to this sort of relationship by saying v is the “donvolution” of the g[sub]i[/sub].)
For convenience, we may also suppose that all the g[sub]i[/sub] take input zero to output zero, and in general act on only nonnegative inputs and outputs, and thus so does v.
Let us now return to our problem for the specific case where g[sub]i[/sub] = x[sup]p[sub]i[/sub][/sup]. (In my previous post’s phrasing, we considered negative inputs, slapping an absolute value on them, but we can just as well ignore them by symmetry considerations.)
Our goal, then, is to understand donvolution of such functions x[sup]p[/sup].
First, note that as convolution is commutative, associative, and linear in each argument, so is donvolution.
Note also that convolving a function with 1 (restricted to nonnegative inputs) is as good as integrating it from a starting point of 0; this means donvolving a function with x is as good as integrating it from a starting point of 0.
Thus, the n-fold donvolution of x with itself is the result of integrating the constant function 1 n times, namely x[sup]n[/sup]/n! [that this is the n-fold integral of 1 corresponds to the basic power rule that x[sup]n[/sup] integrates to x[sup]n + 1[/sup]/(n + 1)]. Which means x[sup]n[/sup]/n! donvolved with x[sup]m[/sup]/m! must be x[sup]n + m[/sup]/(n + m)!. And more generally, the donvolution of various x[sup]p[sub]i[/sub][/sup]/p[sub]i[/sub]! will be x[sup]sum of p[sub]i[/sub][/sup]/(sum of p[sub]i[/sub])!.
Which, put another way, tells us that the donvolution of various x[sup]p[sub]i[/sub][/sup] will be x[sup]sum of p[sub]i[/sub][/sup] divided by the multinomial coefficient (sum of p[sub]i[/sub])!/(product of the p[sub]i[/sub]!). This gives us our v(K), completing the desired result.
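(As a sanity check on that final formula without redoing the donvolution algebra, here is a Monte Carlo sketch in Python, added for illustration only: in two dimensions it estimates what fraction of the box max(|x|, |y|) < 1 is occupied by the region |x|[sup]1/p[sub]1[/sub][/sup] + |y|[sup]1/p[sub]2[/sub][/sup] < 1 and compares that with p[sub]1[/sub]! p[sub]2[/sub]!/(p[sub]1[/sub] + p[sub]2[/sub])! computed via the gamma function. The particular p values and trial count are arbitrary choices.)

[code]
# Monte Carlo check of the volume-ratio formula in 2 dimensions (sketch only).
# Region: |x|**(1/p1) + |y|**(1/p2) < 1 inside the box max(|x|, |y|) < 1.
# Predicted fraction of the box: p1! * p2! / (p1 + p2)!  (p1 = p2 = 1/2 gives pi/4).
import math
import random

def box_fraction(p1, p2, trials=400_000):
    inside = sum(
        1 for _ in range(trials)
        if abs(random.uniform(-1, 1))**(1 / p1)
           + abs(random.uniform(-1, 1))**(1 / p2) < 1
    )
    return inside / trials

for p1, p2 in [(0.5, 0.5), (1.0, 1.0), (2.0, 1.0)]:
    predicted = math.gamma(p1 + 1) * math.gamma(p2 + 1) / math.gamma(p1 + p2 + 1)
    print(f"p1={p1}, p2={p2}: simulated {box_fraction(p1, p2):.4f}, predicted {predicted:.4f}")
[/code]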
At a skim, they seem to be doing some needlessly complicated integrals to explain the formula here. A much better explanation is like so:
Suppose you have lines spaced apart at some distance, toss a figure F onto this at random (i.e., with uniformly random orientation and position), and are interested in the expected number of crossings of the lines with F.
By linearity of expectations, we can break F down into lots of tiny straight bits, and simply add up the expected number of line-crossings for each of these tiny straight bits. Straight bits of the same tiny length can only differ in orientation and position, but since we toss with a random orientation and random position, that difference is obviated too. Thus, the expected number of line-crossings for F is simply proportional to the number of tiny straight bits within it; i.e., proportional to the length of F.
Now, if we pick F to be a circle whose diameter is precisely as large as the distance between the lines, we will find that there are always precisely 2 line-crossings. Thus, we know that in general, for any figure F, the expected number of line-crossings is 2/(circumference of circle with diameter equal to line-spacing) * the length of F.
If we now consider instead the case where F is a straight needle short enough that it almost certainly doesn’t cross lines more than once, we get the usual formula: the expected number of line crossings (which is just the same as the probability of line-crossing) is 2/π times the ratio of needle length to line-spacing.
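(And the same formula can be checked by straight simulation; here is a minimal Monte Carlo sketch in Python, added for illustration only, that drops short needles and backs π out of the observed crossing frequency. The needle length, line spacing, and trial count are arbitrary, and yes, the simulation uses π to pick a random angle, which is a bit circular, but the point is just to illustrate the formula.)

[code]
# Buffon's needle by simulation: P(cross) = 2 * length / (pi * spacing) for a
# needle no longer than the line spacing, so counting crossings estimates pi.
# Illustrative sketch only.
import math
import random

def estimate_pi(length=0.8, spacing=1.0, trials=1_000_000):
    crossings = 0
    for _ in range(trials):
        center = random.uniform(0, spacing / 2)  # distance from needle center to nearest line
        angle = random.uniform(0, math.pi / 2)   # acute angle between needle and the lines
        if center <= (length / 2) * math.sin(angle):
            crossings += 1
    return 2 * length * trials / (spacing * crossings)

print(estimate_pi())    # prints something close to 3.14
[/code]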
No tricky computation necessary!