Show ln x/x < 1/e without derivatives?

I recently worked a problem in which I had to show that (ln x)/x on [1,inf) is less than some number. Taking the derivative, there’s a maximum at x = e, so that wasn’t too hard.

But then I began wondering what I’d do if I couldn’t use the derivative. I couldn’t figure it out, and now it’s just nagging at me.
To phrase the problem :

Without taking the derivative, can you show that (ln x)/x is always less than some value on [1, inf) ? (Use [e, inf) if you prefer.)

Assume that ln x is either the inverse of e^x or = INT (1->x) : 1/t dt.

It seems simple enough, but I know that ln can be tricky. I figure it’s either very easy or extremely complicated.

Well, since x = e^(ln x) you could write this as
ln(x) / e^(ln x)

It should then be fairly easy to show that when x and a are both greater than 1
x * a^(-x) < a^(-1)

It should be hard to show. Try x = 2, a = 3/2.
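For what it's worth, here's a quick numeric check of that counterexample (my own addition):

```python
# Check the proposed inequality x * a**(-x) < a**(-1) at x = 2, a = 3/2:
# the left side is 2 * (2/3)**2 = 8/9, the right side is 2/3, so it fails.
x, a = 2, 3 / 2
lhs = x * a ** (-x)
rhs = a ** (-1)
print(lhs, rhs, lhs < rhs)  # 8/9 > 2/3, so the last value is False
```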

If you just want to show that f(x) = (ln x)/x is bounded on [1, inf), that’s not hard. Just show that for all sufficiently large x > X, f(e*x) < f(x). Thus you can restrict your attention to the compact set [1, e*X]; since f is continuous on this set it must achieve a maximum value there, and this is then the maximum of f on [1,inf).
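A numeric sanity check of this reduction (a sketch only; it reads "f(ex)" as f(e*x), and X = 2 is a choice that turns out to be large enough):

```python
import math

def f(x):
    return math.log(x) / x

# f(e*x) < f(x) rearranges to (e-1)*ln(x) > 1, i.e. x > e**(1/(e-1)) ~ 1.79,
# so any X >= 2 works: the sup over [1, inf) is the max over [1, e*X].
X = 2
for x in [X, 5, 10, 1000, 1e6]:
    assert f(math.e * x) < f(x)
print("f(e*x) < f(x) at all sampled x >= X =", X)
```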

Finding the exact bound without calculus is probably trickier. Note, for example, that one of your proposed definitions of ln x uses integral calculus, which is probably not fair if you don’t want to allow differentiation. The other definition uses e, so you probably have to come up with a useful pre-calculus definition for e as well. This will involve, explicitly or implicitly, some sort of limit, which is getting close to calculus and so sounds a bit shady.

Interestingly, I did something along the same lines, Xema, before realizing that it wasn’t working (and did a simple test as ultrafilter showed).

Thanks, Omphaloskeptic. My efforts were concentrated on trying to show that f(x) was monotonically decreasing above some value, when I really only needed to find a bound. For the purposes of the problem I would have needed an exact bound (just for the curious, I was showing uniform convergence of f_n(x) = (f(x))^n), but the whole thing is a bit silly anyway as noted.

It was just something that made me think about how some problems can be so easily solved simply by using some definitions and proofs. Of course, that’s the whole motivation for deriving those methods, but it’s one thing that fascinates me about mathematics.

Not really a rigorous proof, but here is one way of finding the maximum:

f(x) = ln(x)/x

Let theta be the maximum value of f(x) over [1,inf)

If f(x) <= theta forall x in [1,inf)
then
ln(x) <= theta*x forall x in [1,inf)

Now, theta*x is simply a straight line starting at 0 and having a slope of theta.

If you look at the plot of ln(x), you will see that if we have a straight line starting at 0 with a large enough slope, it will never touch ln(x).

For just the right slope, it will touch ln(x) at exactly one spot. This “just the right slope” is the value of theta that we are looking for.

At the one spot where the two curves touch, we have

  1. ln(x) = theta*x
  2. slope of ln(x) = slope of theta*x

If we assume that we know that the slope of ln(x) is 1/x, then we have
From (2)
1/x = theta
==> x = 1/theta

and plugging back into (1), we have
ln(1/theta) = theta*1/theta = 1

==> 1/theta = exp(1) = e
==> theta = 1/e
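Still not rigorous, but a quick numeric check that theta = 1/e really does the job: the line x/e stays on or above ln(x), touching it only at x = e.

```python
import math

theta = 1 / math.e
# theta*x - ln(x) should be nonnegative on [1, inf), zero only at x = e.
for x in [1, 2, math.e, 4, 10, 1000]:
    assert theta * x - math.log(x) >= -1e-12
print("gap at x = e:", theta * math.e - math.log(math.e))  # essentially 0
```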

How do you know that this exists?

I said “Not really a rigorous proof” :slight_smile:

Well, yeah. But all you’ve argued is that if f is bounded on [1, inf), the bound is 1/e. That’s not the original problem.

I think the following:

shows, in a very non-rigorous way, that f is bounded.

That is, since theta*x, for a large enough theta, will never touch ln(x), and will always be above it, that means that there is a theta for which theta*x > ln(x) forall x in [1,inf),
i.e. theta > ln(x)/x forall x in [1,inf)

Let’s see if I can show this a bit more rigorously

for x = 1
theta*x = theta, ln(x) = 0

for x >= 1
slope of theta*x = theta, slope of ln(x) = 1/x <= 1

So, as long as theta > 1, then

  1. theta*x > ln(x) at x = 1
    and
  2. the slope of theta*x is always larger than the slope of ln(x) in [1,inf)

(1) and (2) above show that as long as theta > 1, theta*x > ln(x) forall x in [1,inf)
i.e. theta > ln(x)/x forall x in [1,inf)

So, there are values of theta that bound ln(x)/x
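This weaker bound is easy to spot-check too; any theta > 1 works (theta = 1.01 below is just an arbitrary choice):

```python
import math

theta = 1.01  # any value strictly greater than 1
# theta*x starts above ln(x) at x = 1 and grows at least as fast thereafter.
for x in [1, 2, 10, 1e3, 1e6, 1e9]:
    assert theta * x > math.log(x)
print("theta*x > ln(x) at all sampled x in [1, inf) with theta =", theta)
```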

Of course, to show this you’ve probably used the Fundamental Theorem of Calculus; this may or may not be “allowed” by the conditions of the OP (but as I said before, the problem definition seems to be pretty closely tied to calculus anyway, so it’s not clear there’s any way around using it).

Touche.

I’ve been kicking this problem around for a couple of days, just as an exercise, and think I have a solution that does not depend on derivatives, but does depend on the fundamental definition for e:

e = lim (h->INF) (1+1/h)^h

and the associated knowledge of limits (e.g. as x->0+, 1/x -> INF). Frankly, if you’re dealing with a number like e (as the base for the natural logarithms), I think you have to concede these basic terms.

**1. (ln x)/x has a local maximum at x=e:** Consider the function y = ln(x)/x. Choose two points on this curve: (x,ln(x)/x) and (ax,ln(ax)/(ax)) where a>1. Given x, what value of a will make the two y-values of these points the same (i.e. such that a line drawn between the two would be horizontal)?

Taking a>1 and x>0, the two y-values will be identical only if

ln(ax)/(ax) = ln(x)/x => ln(ax) = a*ln(x) = ln(x^a) => ax = x^a => a = x^(a-1) => x = a^[1/(a-1)]

Now make the variable change h = 1/(a-1) => a = 1+1/h:

x = (1+1/h)^h

Now let a -> 1 => h -> INF: By definition, x will approach e from below and ax will approach e from above. Since x and ax are approaching each other, the horizontal line connecting the two points becomes a horizontal tangent touching only one point, so x=e is an extremum; in fact it is a local maximum since the surrounding values ln(1)/1=0 and ln(e^2)/e^2=2/e^2 are both less than ln(e)/e.
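The pinching in step 1 can be watched numerically: x = (1+1/h)^h climbs toward e from below, ax = (1+1/h)^(h+1) descends toward e from above, and by construction f takes the same value at both points.

```python
import math

def f(x):
    return math.log(x) / x

for h in [1, 10, 100, 10000]:
    a = 1 + 1 / h
    x, ax = a ** h, a ** (h + 1)      # the paired points with equal heights
    assert abs(f(x) - f(ax)) < 1e-12  # same f-value by construction
    print(h, x, ax)                   # both columns pinch in on e = 2.71828...
```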

**2. (ln x)/x is bounded from above:** It is relatively simple to show that ln(x+1) <= x by appealing only to the definition of e and limits: Take the function ln(x+1) and draw a line connecting (x, ln(x+1)) and (x+a, ln(x+a+1)), where x>0 and a>0. The slope of this line must be (1/a)*ln((x+a+1)/(x+1)), which for fixed a>0 decreases with increasing x. This indicates that for two finite line segments of this type with fixed x-axis width a, the one closer to the origin always has the larger slope. If the two segments share a common endpoint (where one ends the other begins), they always meet at an angle pointing “upwards” on the graph; hence the function ln(x+1) is concave downward at every point.

Draw the specific line between (0,0) and (a, ln(a+1)). The slope of this line must be ln((1+a)^(1/a)). Making the variable change h=1/a gives ln((1+1/h)^h), and as above the argument of the logarithm tends toward e as h->INF (i.e. as a->0). Thus, this line becomes a tangent at (0,0) when its slope reaches 1. Because the curve is everywhere concave downward, the tangent cannot cross back over ln(x+1), so every y-value on the line y=x is equal to or greater than the corresponding value of y=ln(x+1) => x>=ln(x+1) (equal only when x=0). This of course implies that ln(x+1)/x is bounded from above for all x>-1 (excepting for the moment x=0), and a fortiori (ln x)/x is bounded for x>0.
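A spot check of the key inequality ln(x+1) <= x from step 2:

```python
import math

# ln(x+1) <= x for all x > -1, with equality only at x = 0.
for x in [-0.9, -0.5, 0.0, 0.5, 1.0, 10.0, 1000.0]:
    assert math.log(x + 1) <= x
print("ln(x+1) <= x at all sampled points; value at x = 0:",
      math.log(0.0 + 1))  # 0.0
```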

Thus, we have proven that ln x/x has a local maximum at x=e, and that it is bounded from above for all values. Therefore, x=e produces the maximum value. QED, without derivatives. Perhaps a pointless exercise, but there it is…

And then, of course, there’s the intuitive argument: At x=e, ln(x)/x=1/e; if x<e, ln(x) will be negative, and thus ln(x)/x must be <1/e; if x>e, ln(x) will increase more slowly than x will (logarithmic vs. linear), and thus ln(x)/x will decrease.

You were in my calculus class this semester, weren’t you? Where do you kids get these ideas?

The natural logarithm of e is 1, so you’re saying it just jumps down past 0?

First of all, logarithmic-vs.-linear is a statement about how the functions behave at infinity, but says nothing about their behavior at any finite value. Secondly, you’re just reading in a statement that usually takes derivatives to prove, and thus doesn’t satisfy the requirements of the OP.

Not quite. If x < e, then ln(x) will be less than 1. If x < 1, then ln(x) will be negative. So you need some other argument to show that ln(x)/x < 1/e for 1 < x < e.

For that matter, the bit about ln(x) increasing more slowly than x will is only a statement about the behaviour of those two functions in the long run, but there could be another local maximum some time before infinity. After all, ln(x) increases more quickly than x, for sufficiently small x. But to say where it goes from increasing more quickly to increasing more slowly, you need to use derivatives.

The best that you can do without limits is:

ln(x) and e**x are inverses, hence they are symmetric about the line y=x. Hence, for any x in Dom(ln), we have that

e**x > x > ln(x) [1]

Since Dom(ln) = {x| x>0}, we can divide [1] through by x:

(e**x)/x > 1 > ln(x)/x [2]

Since e itself is defined as a limit:

e = lim (n->inf) (1 + 1/n)**n,

you’re going to have to work with limits to get a more precise result.
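That limit converges slowly, which is easy to see numerically:

```python
import math

# (1 + 1/n)**n increases toward e from below, but only slowly.
for n in [1, 10, 100, 10000, 1000000]:
    print(n, (1 + 1 / n) ** n)
print("e =", math.e)  # 2.718281828...
```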

[QUOTE=cerberus]
The best that you can do without limits is:

ln(x) and e**x are inverses, hence they are symmetric about the line y=x. Hence, for any x in Dom(ln), we have that

e**x > x > ln(x) [1]

[/QUOTE]

n*ln(x) and e**(x/n) are also inverses and therefore symmetric about the line y=x. But for n large enough, n*ln(x) > e**(x/n) for some values of x; e.g. for n = 2e and x = e, n*ln(x) = 2e while e**(x/n) = e**(1/2).

The trickiness of the original problem is in the assumptions it exposes; at first glance I also dismissed it per dwalin’s idea of “logarithmic vs. linear”, but then came to realize these terms only describe the behavior of these functions for x sufficiently large, and as Chronos points out a term like “increases more slowly” hides the calculus needed to justify that statement.

It may be impossible to actually solve the problem without using some calculus (in my own solution, the argument about ln(x+1)<=x skirts very close to calculating an actual derivative, YMMV), but the effort in doing so really exposed some low-level mathematical thinking I would usually gloss over. The problem proves the old adage that the journey can often be more fun than the destination.

Clearly, since the problem involves e, and all definitions of e involve some sort of limit. I would maintain that even if they’re not derivatives or integrals, limits are inherently a feature of calculus. Of course, you can do it all without ever using the term “derivative”, but then you have to raise the question of whether you ever use anything equivalent to derivatives without naming them. Usually, this ends up being a lot more tedious, anyway.