Why the natural log?

Uh, this is GQ… the “what” of reality. For the “why”, it’s the Theology section, I’m afraid. The “what” is that “e exists and has all these useful properties”.

There are many instances of the use of e which aren’t as “necessary” now, with the advent of computers and of people like you who tell the computer how to use the logs for us invisibly. But as recently as my father’s generation (he was born in 1938), logs were an enormously useful mathematical shortcut. We still have Dad’s old schoolbooks at home, but the only one I ever saw him use was the log tables - and those, repeatedly. Using ln rather than log[sub]10[/sub] meant two operations fewer, so ln was used.

Eh? No, for multiplication and exponentiation it’s common (base 10) logs all the way, because they’re easily scaleable. To multiply 1234 * 5678 you only need to take the logs of 1.234 and 5.678, add 3 to each and add 'em together, then take the antilog. Natural logs are a bugger to decimal-scale.
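
Here’s a sketch of that paper-and-pencil procedure in Python, with math.log10 standing in for the printed table (the function name is just mine):

[code]
import math

def log_table_multiply(a, b):
    # Reduce each factor to a mantissa in [1, 10) times a power of ten,
    # "look up" the common log of each mantissa, add the logs and the
    # decimal-shift characteristics, then take the antilog.
    ca = math.floor(math.log10(a))    # 1234 -> characteristic 3
    cb = math.floor(math.log10(b))    # 5678 -> characteristic 3
    la = math.log10(a / 10**ca)       # "table lookup" for 1.234
    lb = math.log10(b / 10**cb)       # "table lookup" for 5.678
    return 10**((la + ca) + (lb + cb))

print(log_table_multiply(1234, 5678))  # ~7006652, the exact product
[/code]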

Of course, you can always decimal-scale the result, but that’s not how I was taught to use them (and I’m a generation on from your Dad, barely, but we were still using logs in the early Seventies). I remember how for negative scaling we used this odd “bar” notation, indicating that the integer part of the log was negative while the rest was positive.
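
That bar notation is easy to mimic: keep the fractional part positive and let the integer part go negative. A quick Python sketch (my own function name, just to illustrate):

[code]
import math

def bar_form(x):
    # Split log10(x) into a (possibly negative) integer characteristic
    # and a mantissa kept in [0, 1), as in the old "bar" notation.
    L = math.log10(x)
    characteristic = math.floor(L)
    mantissa = L - characteristic
    return characteristic, mantissa

print(bar_form(0.05678))  # (-2, 0.754...), written "bar-2 .754"
[/code]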

Several different mathematical operations end up giving you the number e, and you can show mathematically why those operations give you the same number, and what its value is. Then, there are many different natural phenomena which correspond to one of those mathematical operations. Which step in this is the puzzling one, the mathematical equivalence, or the correspondence between math and nature?

Yeah, I must have been one of the last to go to high school when math texts commonly had logarithm tables printed in the back, and we learned how to use them to do calculations. (early-to-mid 80s)

True, but it’s important to remember that there’s nothing special about using e as the base of your exponentials to write out such formulas: you can always rewrite one exponential base in terms of another via the equation a[sup]x[/sup] = b[sup]log[sub]b[/sub] a * x[/sup]. In fact, the exponentials used in radioactive decay are often base 2 instead of base e, to make the interpretation of half-lives easier.
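
For a concrete illustration in Python (carbon-14’s roughly 5730-year half-life, numbers purely illustrative), here’s the same decay written both ways; the base-conversion identity guarantees they agree:

[code]
import math

half_life = 5730.0             # carbon-14 half-life in years (illustrative)
lam = math.log(2) / half_life  # decay constant for the base-e form

t = 10000.0
print(0.5 ** (t / half_life))  # fraction remaining, base-2 form
print(math.exp(-lam * t))      # same fraction, base-e form
[/code]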

It’s about time my SAS programming experience came in handy around here!

First of all, you mean the log function, not command. The log command means to invoke the log window.

Log10(x) instead of Log(x) will use Base 10. Log2(x) uses Base 2. I don’t believe there are any others.

You may find http://support.sas.com/91doc/docMainpage.jsp helpful, it has all the V9 docs.
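
For anyone without SAS handy, Python’s standard math module exposes the same trio (plus an arbitrary-base form), which is enough to play along at home:

[code]
import math

x = 100.0
print(math.log(x))     # natural log, like SAS's LOG
print(math.log10(x))   # like LOG10
print(math.log2(x))    # like LOG2
print(math.log(x, 7))  # any base you like, via the two-argument form
[/code]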

It’s about time my SAS programming… aw, CRAP!!

Dude, Napier, don’t even worry about the SAS programming thunder-stealing: this is still totally your thread.

No, there is something special about using e as the base. If you look at the instantaneous rate of change, there’s a natural timescale that shows up in the problem, and if you wait for exactly one of those natural timescales, you will get a change by a factor of e. If you want to use any other base, then you need to introduce an extra factor to the instantaneous rate of change formula. You can, as you note, use any base you want, but the formulae will be simpler in base e.
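
You can watch that extra factor appear numerically. A finite-difference sketch in Python: the ratio (derivative of b^x) / b^x comes out as ln(b), which equals 1 only when b = e:

[code]
import math

h = 1e-6
for b in (2.0, 10.0, math.e):
    deriv = (b**(1 + h) - b) / h      # estimate of d/dx b^x at x = 1
    print(b, deriv / b, math.log(b))  # ratio is ln(b); it's 1 only at b = e
[/code]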

It’s come up several times now (my and Malacandra’s posts on compound interest, and treis’s post on the antiderivative of 1/x), so it’s time to deliver on my promise of showing that (1+1/n)^n approaches e as n gets arbitrarily large. (Depending on your perspective, you may actually take this limit to be the definition of e and then prove that it satisfies d/dx e^x = e^x, but in this post, I’ll be doing the opposite, taking e to be defined by this latter property (which is, in my opinion, essentially the most natural, clean, and motivated definition for it, the one which really brings out the reasons for its special properties), and then proving it to be the limit from the compound interest problem).

First, we’ll need some preliminaries. We need to see that d/dx ln(x) = 1/x. (NB: treis showed this in his above post, but did so with (essentially) the assumption that (1+1/n)^n approaches e as n gets large; we obviously need to provide a different proof, to avoid circularity.)

Consider two variables, x and y, related by y = e^x. By the definition of e, we have that dy/dx = y.
Therefore, dx/dy = 1/y. Since x is related to y by x = ln(y), we’ve just demonstrated that d/dy ln(y) = 1/y, as we set out to.
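
If you want a numerical sanity check of that, here’s a quick finite-difference sketch in Python:

[code]
import math

h = 1e-6
for y in (0.5, 1.0, 3.0, 10.0):
    deriv = (math.log(y + h) - math.log(y)) / h  # estimate of d/dy ln(y)
    print(y, deriv, 1 / y)  # the two columns agree
[/code]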

Now, let’s consider the limit of ln(1+x)/x as x approaches 0. The top goes to ln(1+0) = 0, and the bottom obviously goes to 0 as well. This gives us the indeterminate form 0/0, so we have to apply l’Hôpital’s rule, which tells us we can replace the top and bottom in this case with their derivatives, without changing the limit. Using the chain rule and our above demonstration, we see that the derivative of ln(1+x) is 1/(1+x). And, of course, the derivative of x is 1. So, after the replacement, we get (1/(1+x))/1 = 1/(1+x). As x approaches 0, this will of course approach 1. Thus, we can conclude (via l’Hôpital) that ln(1+x)/x approaches 1 as x approaches 0.
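
Here’s that limit crawling toward 1 numerically (Python again):

[code]
import math

for x in (1.0, 0.1, 0.01, 0.001, 1e-6):
    print(x, math.log(1 + x) / x)  # approaches 1 as x approaches 0
[/code]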

What good is that? Well, from that limit, we can conclude (using continuity) that e^(ln(1+x)/x) approaches e^1 as x approaches 0. But e^(ln(1+x)/x) = e^(ln((1+x)^(1/x))) = (1+x)^(1/x). And so we’ve shown that (1+x)^(1/x) approaches e as x approaches 0.

This is pretty much everything we want, but I’ll go just a bit furhter. Replacing x with r/n, we see that (1+r/n)^(n/r) approaches e as r/n approaches 0. This means (1+r/n)^n approaches e^r as r/n approaches 0. Holding r constant, this is essentially the same as saying that (1+r/n)^n approaches e^r as n gets arbitrarily large. Voila, this is precisely the fact which I used above in my post on compound interest.
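
Numerically, with a made-up 5% rate, the convergence looks like this in Python:

[code]
import math

r = 0.05  # hypothetical annual rate
for n in (1, 12, 365, 10**6):    # yearly, monthly, daily, "continuous"
    print(n, (1 + r / n) ** n)
print("e^r =", math.exp(r))      # the compounding limit
[/code]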

I think this covers almost all the special properties you’re likely to run into about e and ln. The two others noted by Chronos in post #13 (the Taylor series expansion of e^x, and Euler’s equation e^(i*x) = cos(x) + i*sin(x) in radians, leading to the consequence that e^(i*pi) = -1) follow very straightforwardly from our definition that d/dx e^x = e^x, along with the differentiation rules for sin and cos in radians, once you have some knowledge of manipulation of Taylor series.
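
As a small illustration of how straightforwardly that works out, here are partial sums of the Taylor series for e^x evaluated at x = i*pi, crawling toward cos(pi) + i*sin(pi) = -1 (Python’s complex numbers do the bookkeeping):

[code]
import cmath
import math

z = 1j * math.pi
total, term = 0j, 1 + 0j
for n in range(25):
    total += term         # partial sum of sum(z^n / n!)
    term *= z / (n + 1)   # next term built from the previous one
print(total)              # ~(-1+0j)
print(cmath.exp(z))       # the library agrees
[/code]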

If I could edit, I’d replace the word “furhter” near the end of the above post with “further”… but, alas.

[zombie voice]Joinnn usss… Joinnn usss…[/zv]

Mmmmmmmmmmmmm. Frankfurhter.

The correspondence between math and nature part. After reading through all of this, I understand it a bit better, but not well enough to explain it to somebody else.

1000 posts! And it’s only taken me six years!

I prefer e[sup]i*pi[/sup]+1=0, thereby getting the five most useful numbers in all of mathematics into one identity. (I think you can pretty much construct any equation using these five numbers in some combination.) And it just gets more beautiful every time I look at it.

Stranger

One last point which doesn’t address the correspondence between math and nature (I’d chalk pretty much all of that up to e’s role in solutions to simple differential equations, as illoe pointed out earlier), but which does at least address the math, is that one might ask “Why should there be some unique constant e such that e^x is its own derivative? Why should any number have that property? Or why not two different numbers both having that property?”. Well, we can approach this via the formal theory of existence and uniqueness of solutions to differential equations (and prove some stronger versions of it), but we can also prove this particular fact in a much simpler way.

Pick some b, your favorite number, 2, 10, pi, 1/2, whatever. I don’t really care what b you pick, so long as it’s positive and not 1 (so that we can take logarithms with it as a base); this proof will go through just fine with any such b.

Now consider the derivative of the function b^x. By definition, this is the limit as h goes to 0 of (b^(x+h) - b^x)/h, which equals b^x (b^h - 1)/h. We can pull b^x out of the limit, so that the derivative is b^x * the limit of (b^h - 1)/h as h goes to 0. Does that limit exist? It’s not hard to show it does, but, at any rate, I assume you’ll grant me that the derivative of b^x exists, and therefore that the limit exists. What’s interesting, though, is that the limit term contains no references to x, so its value must be some constant depending only on the base b. Call this constant K. We have that the derivative of b^x is K * b^x.

Now, how about the derivative of c^x for some other base? Well, c^x = (b^log_b(c))^x = b^(log_b(c) * x), where I use log_b to denote the base b logarithm. By the chain rule and the work we just did to find the derivative of b^x, we can conclude that the derivative of b^(log_b(c) * x) is log_b(c) * K * b^(log_b(c) * x). Since c^x = b^(log_b(c) * x), this tells us that the derivative of c^x is log_b(c) * K * c^x. And thus, there is a unique c such that c^x is its own derivative: it will be the unique c such that log_b(c) * K = 1. Specifically, the only such c will be equal to b^(1/K). Q.E.D.
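
You can estimate K numerically and watch b^(1/K) come out as the same number for every base; K itself comes out as what we’d now write ln(b). A Python sketch:

[code]
import math

h = 1e-8
for b in (2.0, 10.0, 0.5):
    K = (b**h - 1) / h       # the constant from the limit
    print(b, K, b**(1 / K))  # K ~ ln(b); b^(1/K) ~ 2.71828... every time
[/code]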

So that explains why it is legitimate to define e the way we have. Which, if not addressing e’s role in nature, hopefully at least further clarifies e’s nature in itself.

Indistinguishable, you make me blush.

Though, the threads about the Pope being the Antichrist - those are the ones I really put myself into…

In analysis, the exponential and logarithmic functions are generally defined in terms of the integrals[sup]1[/sup] that have been thrown about here. Just to summarize:

exp(x) = sum( x[sup]n[/sup]/n!, 0 <= n )
ln(x) = int( 1/t, 1 <= t <= x ) for 1 <= x; -int( 1/t, x <= t <= 1 ) for 0 < x < 1
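
Those two definitions are easy to check numerically; here’s a Python sketch, with a midpoint rule standing in for the integral (function names are mine):

[code]
import math

def exp_series(x, terms=40):
    # exp(x) = sum of x^n / n! for n >= 0
    total, term = 0.0, 1.0
    for n in range(terms):
        total += term
        term *= x / (n + 1)
    return total

def ln_integral(x, steps=100_000):
    # ln(x) = integral of 1/t from 1 to x (sign flips for 0 < x < 1)
    a, b = (1.0, x) if x >= 1 else (x, 1.0)
    dt = (b - a) / steps
    s = sum(1.0 / (a + (i + 0.5) * dt) for i in range(steps)) * dt
    return s if x >= 1 else -s

print(exp_series(1.0), math.e)            # both ~2.718281828
print(ln_integral(10.0), math.log(10.0))  # both ~2.302585
[/code]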

Given these definitions, it’s very easy to see that d/dx exp(x) = exp(x) and d/dx ln(x) = 1/x. It’s also easy to see that both are one-to-one, and from there to prove that one is the inverse of the other. Here’s a quick proof:

Because exp(x) is one-to-one, it has an inverse function, L(x). exp(L(x)) = x, so d/dx exp(L(x)) = 1. By the chain rule, d/dx exp(L(x)) = exp’(L(x))L’(x), which is equal to exp(L(x))L’(x), and that further reduces to xL’(x). Since xL’(x) = 1, it follows that L’(x) = 1/x, and hence that L(x) - ln(x) is constant. Evaluating at x = 1, where exp(0) = 1 gives L(1) = 0 and ln(1) = 0 as well, shows that constant is zero, so L(x) = ln(x).

From here, it’s a simple matter to define cos(x) = (exp([symbol]i[/symbol]x) + exp(-[symbol]i[/symbol]x))/2 and sin(x) = (exp([symbol]i[/symbol]x) - exp(-[symbol]i[/symbol]x))/(2[symbol]i[/symbol]) and prove that they have all of the properties we usually associate with them.

[sup]1[/sup]Riemann integrals and series are both special cases of more general integrals.
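
And a quick numerical check in Python that those exponential definitions of cos and sin really do match the usual functions:

[code]
import cmath
import math

for x in (0.0, 1.0, math.pi / 3):
    c = (cmath.exp(1j * x) + cmath.exp(-1j * x)) / 2    # cos via exp
    s = (cmath.exp(1j * x) - cmath.exp(-1j * x)) / (2j) # sin via exp
    print(x, c.real, math.cos(x), s.real, math.sin(x))
[/code]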

Put that way, it’s also got one each of addition, multiplication, and exponentiation (arguably the three most important operations in mathematics), an observation which might make the identity even more aesthetically pleasing to you.

Flowers are aesthetically pleasing. Equations like that are simply candy for the mind. :slight_smile: