Why the natural log?

I’ve been wondering this for years: Why do we use a base of e (2.71828…) for so many logarithms? I’ve seen it used often in biology and the social sciences, my husband uses it in econ/finance, and I’m sure it’s used in other fields as well. What is it about e that makes it so special and useful? Why is it better than using some more obvious, accessible number like 10, or 3, or something like that as a base?

My math background: I took two semesters of calculus years ago, did reasonably well, and immediately forgot all of it. I’ve taken basic statistics more recently, and still remember much of that. Currently, I’m suffering through a class on SAS, where, interestingly, the log command automatically assumes you mean natural logs; the professor doesn’t even know how to make it use a different base and was surprised when we asked him about it.

There are many, many useful properties of e as a base, but here’s the most obvious one: the derivative of b^x, with respect to x, is ln(b) * b^x. Thus, with a base of e, we see that the derivative of e^x is e^x itself, which won’t hold for other bases.

Oh, and as for how to get logs in different bases if you only have access to natural logs, you may recall that the base b log of x is equal to ln(x)/ln(b).

Yeah, but that holds for any pair of bases: log[sub]a[/sub]x = log[sub]b[/sub]a x log[sub]b[/sub]x (I think I got that the right way around). It’s just that a happens to be e in this case. :slight_smile:

The de[sup]x[/sup]/dx = e[sup]x[/sup] thing is what I was trying to remember.

Oh, yeah, I wasn’t trying to point this out as another example of a special property of e, just as a useful way to compute other logs if (as the OP said was the case in some situation) you only have the natural log available.

OK, but why is that useful for calculating interest rates, or risk ratios within a population?
Also, why is that true? What magical property of e makes it have itself as its derivative?

It’s also because the integral of 1/x is the natural log. In any other base, you’d have a multiplicative factor.

An excellent book on e, in the style of Petr Beckmann’s classic book on the history of pi, is e: The Story of a Number, by Eli Maor.

Well, by some perspectives, e is defined as the particular number having that property.

As for why e is useful for calculating interest rates, let’s consider continuously compounded interest. Suppose you have a yearly interest rate of r [so that if you have M dollars in your account, then wait a year, and then calculate interest, it’ll come out to (1+r)M dollars]. Suppose you compound interest periodically, n times a year. If we consider the interest rate of each of these periods to be 1/n times the interest rate of the whole year, then your money will multiply by (1+r/n) at each compounding period. In x years, you’ll have gone through xn compounding periods, and your money will have multiplied by (1+r/n)^(xn). Which is equal to ((1+r/n)^n)^x.

Now, as n gets larger and larger, to approximate continuously compounded interest, what happens to this ((1+r/n)^n) term? Well, the term will actually approach e^r [another crazy useful fact about e]. Thus, with continuously compounded interest, after x years, your money will have multiplied by (e^r)^x.

If you want me to demonstrate just why ((1+r/n)^n) should approach e^r as n gets larger and larger, I suppose I can spell that out too, but this should at least shed some more light on the usefulness of e in the meantime.
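If a numerical illustration helps, here’s a quick Python sketch (my own, not anything official) of that compounding limit, using a made-up 5% annual rate:

```python
import math

# Watch (1 + r/n)^n approach e^r as the compounding frequency n grows.
# The rate r = 0.05 here is just a hypothetical example.
r = 0.05
for n in (1, 12, 365, 1_000_000):
    print(n, (1 + r / n) ** n)
print("e^r =", math.exp(r))
```

Even monthly compounding already lands quite close to the continuous limit; the million-period case is indistinguishable from e^r to six decimal places.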

log[sub]a[/sub]x = log[sub]b[/sub]x / log[sub]b[/sub]a

I stand corrected! :slight_smile: (It’s been a long time since I used this, but sometime soon, I’ll be getting a new copy of my old college math book and learning this stuff all over again… and then moving into new territory…)

Another possible reason is that the natural logarithm has a nice and simple Taylor expansion:

ln x = (x-1) - (x-1)[sup]2[/sup]/2 + (x-1)[sup]3[/sup]/3 - (x-1)[sup]4[/sup]/4 + …

As far as I know, no other logarithms expand anywhere near as cleanly.

I don’t know how calculators typically implement logarithms–I imagine they’d use an iterative method that converges faster than the Taylor series–but I wouldn’t be surprised if whichever algorithm is used also calculates natural logarithms by default, with other bases requiring additional calculations.

Well, of course, other logarithms expand out to the same Taylor series, times a constant, thanks to the change of base formula. (Specifically, the base b logarithm would expand out to the same series times 1/ln(b)).

Still feels the cleanest when the coefficients are the alternating harmonic series, I agree, but I wouldn’t consider it terribly computationally advantageous or anything.
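For what it’s worth, here’s a small Python sketch (my own illustration) showing partial sums of that alternating series homing in on ln(x) for an x near 1:

```python
import math

# Partial sums of (x-1) - (x-1)^2/2 + (x-1)^3/3 - ..., which converge
# to ln(x) for x in (0, 2]. The helper name ln_series is made up here.
def ln_series(x, terms=60):
    u = x - 1
    return sum((-1) ** (n + 1) * u ** n / n for n in range(1, terms + 1))

print(ln_series(1.5), math.log(1.5))
```

Note the series only converges for x in (0, 2], and quite slowly near the endpoints, which is part of why real implementations don’t use it directly.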

e[sup]x[/sup] also has a very clean power series: e[sup]x[/sup] = x[sup]0[/sup]/0! + x[sup]1[/sup]/1! + x[sup]2[/sup]/2! + x[sup]3[/sup]/3! + … = Sum(n=0…infinity, x[sup]n[/sup]/n!).

Base-e also shows up naturally in situations with a characteristic growth or decay rate, like radioactive decay, or population growth. Suppose I look at such a system for a very short amount of time, t: The amount by which the number n of atoms (or population, or whatever) changes in that short time is equal to n*t/tau, where tau is a constant relating to that system. If I wait for a time tau, then n has changed by a factor of e (the interest example is another case of this).

The natural exponential is also closely related to the trig functions via Euler’s formula, e[sup]ix[/sup] = cos(x) + i*sin(x) (which also, incidentally, demonstrates the value of using radians for angles). As a famous special case of this formula, e[sup]i*pi[/sup] = -1.


Hope you stick around, Indistinguishable, you’re pretty good at explaining math in an easy-to-follow way. I may be asking a few questions of my own for you soon.

Aw, shucks… Thanks.

Yes, but why is that? Why is it that the natural world seems to reflect this number e all the time?

e pops up a lot where the rate of change of something is proportional to the amount of that something. Why is that the case? Well, let’s look at the math. Say we have x amount of something, and the rate of change of x is proportional to x. The equation for that is:

dx/dt=A*x

Separating variables:

dx/x=A*dt

Integrating:

ln(x)=A*t + constant
solving for x:

x=e^(At+constant)

If you remember your exponential rules, e^(a+b)=e^(a) * e^(b) so we get:

x=C*e^(At)

This is an important result, because a ton of things change at a rate proportional to the amount of that thing. For example, cell division occurs at a rate proportional to the number of cells, capacitors discharge at a rate proportional to the amount of charge on the capacitor, and interest accumulates at a rate proportional to the amount of money in the account. So that’s why exponentials show up so much. As for why e specifically, you have to go back to the definition of a derivative. The math gets a bit complicated here, but you should have seen it before.

Before I start, let’s be clear about what I am going to do. We have a generic situation where the rate of change of something is proportional to the amount of that something. In mathematical terms that means dx/dt=A*x, where A is just some constant. Separating variables gets us dx/x=A*dt. What I want to find is what function of x has a derivative equal to 1/x.

The derivative is defined as:

The limit of [f(x+h)-f(x)]/h as h approaches 0.

Let’s just say we know that the function we are taking the derivative of is a log, but we don’t know its base yet. Our derivative is then:

[log(x+h) - log(x)]/h as h goes to 0.

I’m going to drop the “as h goes to 0” at this point just because I don’t feel like typing it over and over.

Remember that: log(a)-log(b)=log(a/b), from the log rules.

We can combine the logs to get:

log([x+h]/x)/h

Doing some algebra inside the log gets:

log(1+h/x)/h

Again from the log rules: n*log(a)=log(a^n). Applying that gets us:

log([1+h/x]^(1/h))

Now it’s useful to add another variable u=h/x. Note: lim u->0 is equal to lim h->0. Plugging that in gets us:

log([1+u]^(1/[u*x]))

Using log(a^n)=n*log(a) we can get the x out:

1/x*log([1+u]^(1/u))

Remember that we are taking the limit of this as h->0, which as noted earlier is the same as u->0. Would you like to guess what the limit of [1+u]^(1/u) as u goes to 0 happens to be? If you guessed e pat yourself on the back. Let’s re-write it:

1/x*log(e)

The goal of this exercise was to find what function of x has the derivative 1/x. Since we just want 1/x, we know that log(e) must be 1. That is true only when the base of the log is e. Thus we have proven that the integral of 1/x is log[sub]e[/sub](x), or ln(x).
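Both of the key facts in that derivation are easy to poke at numerically. Here’s a Python sketch of my own, checking the limit of (1+u)^(1/u) and the derivative of ln at an arbitrary point x = 3:

```python
import math

# The key limit from the derivation above: (1+u)^(1/u) -> e as u -> 0.
for u in (0.1, 0.001, 1e-6):
    print(u, (1 + u) ** (1 / u))
print("e =", math.e)

# A finite-difference check that d/dx ln(x) = 1/x, at x = 3:
x, h = 3.0, 1e-6
print((math.log(x + h) - math.log(x)) / h, 1 / x)
```

The smaller u and h get, the closer the two quantities creep toward e and 1/x, exactly as the limit argument says they should.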

Let’s write our equation again and solve it:

dx/dt=A*x

dx/x=A*dt

Now we can integrate to get:

ln(x)=A*t+constant

Remember the definition of a log: b^(log[sub]b[/sub]A)=A. Therefore, in order to solve for x, we need to raise e to both sides, and thus e appears in our final equation:

x(t)=C*e^(At)
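As a sanity check on that solution, here’s a quick Python sketch (my own, with made-up values of A, C, and the end time): step dx/dt = A*x forward with many tiny Euler steps and compare against the closed form C*e^(At).

```python
import math

# Hypothetical parameters: growth rate A, initial amount C, end time T.
A, C, T = 0.5, 2.0, 1.0
n = 100_000
dt = T / n

x = C
for _ in range(n):
    x += A * x * dt  # dx = A*x*dt, applied one small step at a time

print(x, C * math.exp(A * T))
```

The brute-force simulation and the e-based closed form agree to several decimal places, which is the whole point: e^(At) is what "rate proportional to amount" looks like when you take the step size to zero.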

The clearest answer to this, I think, involves differential equations, some of which are actually quite easy. Consider a couple of bunnies. Those two can reproduce, making two more. Now you have four, or two pairs, each of which can reproduce. So after two generations, you have eight bunnies, and so on. Let B(t) be a function describing how many pairs of bunnies we have after t generations. In this example, it’s pretty clear that B(t)=2[sup]t[/sup]. Huh, it looks like we don’t need e after all. Unless…

What if we had, instead of discrete bunnies, a Ganymedian slime mold that grew continuously? We may have no clear boundary between generations, but we can say that the rate of growth at any instant, d[GSM(t)]/dt, is equal to the amount of slime mold GSM(t) at that instant (taking the constant of proportionality to be 1 to keep things simple):

d[GSM(t)]/dt = GSM(t), a very simple ordinary, first-order differential equation.

We can rearrange it, dividing both sides by GSM(t) and multiplying both sides by dt, to get
[1/GSM(t)]*d[GSM(t)] = dt

Integrating both sides is easy, thanks to the properties of the natural log mentioned by CalMeacham above (and taking the starting amount to be 1, so the constant of integration vanishes), giving
ln[GSM(t)] = t

or,

GSM(t) = e[sup]t[/sup]

This is just like Indistinguishable’s example of continuously compounding interest, and indeed this particular differential equation shows up all over the place, whenever you have a growth or decay rate that is proportional to a population.

The simplest second-order differential equations (involving d[sup]2[/sup]y/dx[sup]2[/sup]) have sines and cosines as their fundamental solutions, which are not unrelated to the natural exponential, as Chronos notes. These give us equations for light waves arising from the mutual interaction of magnetic and electric fields, for example, and are, I think, the best possible argument for Platonic mathematical ideas. Perhaps my imagination is faulty, but I think a sophisticated slime mold from Ganymede would know exactly what we mean by sin(x) and e[sup]x[/sup] if we started from the same differential equations.

I thought that e was also the base of a number of mathematical curves that describe things in nature. Perhaps the most explicit example I recall is the logarithmic spiral that describes the curve of the shell of the nautilus.

That power series is one explanation of why e[sup]x[/sup] differentiates to itself. Differentiate each term from the left, and the first term vanishes, the second becomes equal to the previous first, the third to the previous second, and so on to infinity.

Also if you take (1+(1/x))[sup]x[/sup]…

Suppose I charge you 100% interest on a loan over a year. You borrow $1m and at the end of the year you owe me $2m. Now suppose I make it 50% compounded six-monthly. Now you owe me $2.25m after a year ($1.5m after six months, and then another 50% of the whole thing after the next). Now suppose I make it 25% compounded quarterly: that comes to about $2.44m. Now suppose I divide the year into one million equal parts (about 30 seconds each) and charge you 0.0001% interest every half-minute. Now suppose I use a really fast computer to slice the year into a huge number of slices x, and charge you compound interest at a rate of 1/x per slice. As x tends to infinity, your debt after a year tends to e megabucks. That’s the value the expression above tends to (taking one million dollars as the unit in this case).
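That loan schedule is three lines of Python to reproduce (my own sketch, with the debt in millions of dollars):

```python
# Year-end debt, in millions of dollars, for a $1m loan at 100% annual
# interest compounded over x equal slices: (1 + 1/x)^x, which tends to e.
for x in (1, 2, 4, 1_000_000):
    print(x, (1 + 1 / x) ** x)
```

The first two lines match the figures above ($2m and $2.25m), and the million-slice case is already e to five decimal places; dividing the year any finer barely moves the debt at all.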