Smart Ass Question on the Derivative of f(x) = |x|

In the math textbooks I’ve banged my head on, the absolute value as a function is defined thusly:

If x >= 0, then |x| = x. If x < 0, then |x| = -x

Which is all fine and dandy, except that you can shift the equal sign over to the x < 0 without much of a hitch. If you set f(x) = |x|, then:

f’ = 1 when x > 0, f’ = -1 when x < 0, and at 0, f’ does not exist (DNE)

So hey, why not define |x| this way:

If x > 0, |x| = x
If x < 0, |x| = -x
If x = 0, |x| = 0

This gives us the same derivative EXCEPT for the case where x = 0. Now the derivative DOES exist and it equals zero.

I do realize that if you graph this function, there is a sharp corner at x=0 that would, at least on visual inspection, imply that f’(0) DNE. Still, most (all?) v-shaped corners like this come from having an absolute value present in the function. So why not define the derivative at this type of corner as equal to zero? Given that there seems to be an arbitrary definition of 0! as equal to one, it doesn’t seem outside the realm of logical possibility.

DISCLAIMERS:

#1: This is on the Real number line; if there is a good reason that uses Complex numbers as proof, explain away. But since I am a statistician in training, do expect me to run from this thread screaming.

#2: If there is a good mathematical reason why 0! = 1 that doesn’t involve “well, it works” I’m all ears.

No, it doesn’t. Your “redefinition” of the absolute value function doesn’t actually change the absolute value function in any way, hence it doesn’t change any properties of the derivative of the absolute value function. If f(x)=g(x) for all x, then f’(x) is defined if and only if g’(x) is defined.

First, there’s no compelling reason to do so. Second, imposing such a definition would require re-interpreting the meaning of the derivative substantially. One of the main points of the derivative is that at the point x=a, the linear function f(a)+f’(a)*(x-a) should be the best linear approximation to the function f(x). But when f(x) is the absolute value function, there is no “best linear approximation” at the origin.

Your proposed definition would also break certain theorems, I think. If memory serves, under the current definition of derivative, if f’(x) is defined everywhere then f’(x) must be continuous, which is a useful fact. Defining f’(0)=0 for the absolute value function would break that.
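The “no best linear approximation” point is easy to check numerically. Here’s a toy Python sketch (my own code, nothing standard): for any candidate slope m, the worst-case error of the line m*x against |x| over a window [-h, h] stays proportional to h, instead of shrinking faster than h the way it would for a differentiable function.

```python
# Toy check (my own code): for f(x) = |x| at 0, no slope m gives a line
# whose worst-case error over [-h, h] shrinks faster than h itself.
# For a differentiable function this ratio would go to 0 with h.

def worst_error_ratio(m, h, steps=1000):
    """Max of | |x| - m*x | / h over a grid of x in [-h, h]."""
    xs = [h * (2 * i / steps - 1) for i in range(steps + 1)]
    return max(abs(abs(x) - m * x) for x in xs) / h

for m in (-1.0, -0.5, 0.0, 0.5, 1.0):
    print(m, [worst_error_ratio(m, h) for h in (1.0, 0.01, 0.0001)])
    # the ratios never drop below 1, no matter how small h gets
```

For a genuinely differentiable function like x[sup]2[/sup], the same ratio does go to 0 as h shrinks, which is exactly what “best linear approximation” means.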

I don’t understand how the derivative is 0 at 0 with your definition. The derivative can be defined in a lot of ways, but usually it is defined along the lines of
(f(x+dx) - f(x))/dx or perhaps (f(x) - f(x-dx))/dx
with the value converging to something as dx gets very small. With the absolute value function at x=0, the first one converges to 1 and the second converges to -1. With smooth functions these two ways of defining the derivative converge to the same value.
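Those two quotients are quick to try out; here’s a toy Python sketch:

```python
# Toy check: forward and backward difference quotients for f(x) = |x| at 0.
# If f were differentiable there, both would home in on the same number.

f = abs

for dx in (0.1, 0.01, 0.001):
    forward = (f(0 + dx) - f(0)) / dx   # always 1.0
    backward = (f(0) - f(0 - dx)) / dx  # always -1.0
    print(dx, forward, backward)
```

No matter how small dx gets, the two quotients sit at 1 and -1, which is exactly why the two-sided limit fails.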

I don’t think defining the function in this way would actually change the derivative at x = 0. The derivative of f(x) is the limit as y -> 0 of (f(x+y)-f(x))/y. Even as you’ve defined the function above, the limit is still not defined since you get a different value as y approaches zero from above than you do as y approaches zero from below.

The derivative of a piecewise function at a certain point isn’t necessarily equal to the derivative of the function that makes up the piece containing that point. For instance, consider f(x) defined by:
f(x) = 1 if x >= 0
f(x) = -1 if x < 0

You can’t just find f’(0) by taking the derivative of 1 at x=0. If you did that, you’d get f’(0) = 0, when in fact f’ is undefined at x = 0.
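Same idea in a quick Python sketch (toy code, mine): differentiating the piece containing 0 would give f’(0) = 0, but the actual difference quotient from the left blows up.

```python
# Toy check: the step function from the post above.  The "derivative of the
# piece" would say f'(0) = 0, but the difference quotient from the left
# grows without bound, so f'(0) does not exist.

def f(x):
    return 1 if x >= 0 else -1

for h in (0.1, 0.01, 0.001):
    right = (f(0 + h) - f(0)) / h    # (1 - 1)/h = 0
    left = (f(0 - h) - f(0)) / (-h)  # (-1 - 1)/(-h) = 2/h
    print(h, right, left)
```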

0! = 1 is consistent with the identity Γ(n + 1) = n!, so there’s a little bit more to it than just hand-waving.

Even with your definition of |x|, the derivative at 0 still fails to exist. Remember that the derivative of a function f at c is given as the limit as x approaches c of (f(x) - f(c))/(x - c). With c = 0, f(c) = 0 and the limit is f(x)/x. As x approaches 0 from the right, f(x) = x, so the limit is 1. As x approaches 0 from the left, f(x) = -x, so the limit is -1. Same deal: you got no derivative at zero.

Note: I am not a mathematician. I just started Calc BC, so I’m just making some good-sounding guesses here.

Well, a derivative is the slope of the tangent line. At the corner point of y = |x|, any line with a slope between -1 and 1 (exclusive) would be tangent. So, it’s undefined.

Either combinations, or permutations, or maybe both (I don’t really remember) wouldn’t work with 0! being equal to anything else. Which is basically “well, it works” rephrased. Oh, well. I gave it a shot.

Ack, my claim that an everywhere-defined derivative must be continuous is wrong! The standard counterexample is f(x)=x[sup]2[/sup]*sin(1/x) when x is not 0, and f(0)=0.

[smacks self…bad mathematician!]
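For anyone who wants to see that counterexample in action, here’s a rough numerical sketch in Python (my own toy code): the difference quotient at 0 is squeezed to 0, so f’(0) exists, but f’(x) keeps swinging between roughly -1 and 1 no matter how close you get to 0.

```python
import math

# Toy check of the counterexample f(x) = x^2 sin(1/x), f(0) = 0.
# The difference quotient at 0 is h*sin(1/h), squeezed to 0, so f'(0) = 0.
# But f'(x) = 2x sin(1/x) - cos(1/x) keeps oscillating near 0.

def f(x):
    return x * x * math.sin(1 / x) if x != 0 else 0.0

def fprime(x):  # the derivative, valid for x != 0
    return 2 * x * math.sin(1 / x) - math.cos(1 / x)

for h in (1e-2, 1e-4, 1e-6):
    print(h, (f(h) - f(0)) / h)  # shrinks toward 0

# sample f' at points sneaking up on 0: it swings between about -1 and 1
print(fprime(1 / (200 * math.pi)), fprime(1 / (201 * math.pi)))
```

So the derivative exists everywhere, yet f’ is not continuous at 0, which is why my earlier “theorem” can’t be right.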

However, while refreshing my memory with my old real analysis textbook, I was reminded of a better example: the Mean Value Theorem. If we define f’(0)=0 for the absolute value function, then f(x) is defined and differentiable everywhere on the interval [-1,2], but there’s no point c in the interval satisfying f’(c)=(f(2)-f(-1))/(2-(-1)). The Mean Value Theorem is sufficiently important that breaking it is good enough reason not to define f’(0)=0 for the absolute value function. (And it has the advantage of being true, unlike my last example.)
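The arithmetic there is quick to verify; a toy Python sketch:

```python
# Toy check of the Mean Value Theorem failure: with f(x) = |x| on [-1, 2],
# the secant slope is 1/3, but the proposed "derivative" only ever takes
# the values -1, 0, and 1, so no c can satisfy f'(c) = 1/3.

f = abs

secant = (f(2) - f(-1)) / (2 - (-1))
print(secant)  # 0.333...
```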

(I always forget about x[sup]2[/sup]*sin(1/x).)

If you have n items and you want to pick all of them out one at a time, then the number of ways you can do that is n!.

If you have 2 items, there are 2 ways to pick all the items (pick the first then the second, or the second then the first). 2! = 2.

If you have 1 item, there is 1 way to pick all the items (just pick the only item). 1! = 1.

If you have 0 items, there is 1 way to pick all the items (sit back and do nothing). 0! = 1.
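This counting argument is easy to sanity-check with Python’s itertools (a quick sketch):

```python
from itertools import permutations

# Toy check: the number of ways to arrange all n items is n!, including
# the empty case, where the single "arrangement" is doing nothing.

for items in ([], ['a'], ['a', 'b'], ['a', 'b', 'c']):
    print(len(items), len(list(permutations(items))))
# counts come out 1, 1, 2, 6 -- i.e. 0!, 1!, 2!, 3!
```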

Keep making those good guesses. You’ll go far.

This is something you should know, to be a statistician. And the complex numbers thing. Frankly, if you’re a statistician who’s afraid of complex numbers then I don’t want you calculating anything mission-critical for me. That said.

Consider the integral over t from zero to infinity of e[sup]-t[/sup]t[sup]x[/sup] dt. Integrate by parts (I for integral sign).

Gamma(x) =
I[sub]0[/sub][sup]inf[/sup] e[sup]-t[/sup]t[sup]x[/sup] dt =
I[sub]0[/sub][sup]inf[/sup] e[sup]-t[/sup]xt[sup]x-1[/sup] dt =
x * Gamma(x-1)

Now, Gamma(1) can be calculated to be 1. So, for x a natural number, Gamma(x) = x!. You can calculate Gamma(0) yourself.

If you don’t know the Gamma function’s role as a distribution, you’re nowhere near a statistician.
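A quick sketch with Python’s standard library (note that math.gamma uses the standard convention Γ(n) = (n - 1)!, i.e. Γ(n + 1) = n!, which is shifted by one relative to the Gamma(x) = x! convention used above):

```python
import math

# Toy check: math.gamma follows the standard convention Gamma(n) = (n - 1)!,
# so Gamma(n + 1) matches n! for the natural numbers.

for n in range(6):
    print(n, math.gamma(n + 1), math.factorial(n))

print(math.gamma(1))  # Gamma(1) = 0! = 1
```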

OK, you got me, I’m a student & haven’t had any grad courses in stats yet. But thank you everyone for the thread, it’s a good review for the basics of derivatives, in which I keep mistaking technique for proof. The mean value theorem & two-sided limit approach cleaned some of the rust off my brain.

FTR, I joke about my fear of complex numbers; it’s just that complex analysis kicked my ass last semester. Of course, that had a lot to do with the fact that it was also the same time my BIL was dying of cancer, plus a huge number of lesser disasters and heartbreaks. I’ve been off math for a year helping to get my family back together.

Seriously speaking, I do want to get into complex analysis again at some point; I do find it interesting. And that being said, I vaguely remember some reason that you can’t construct a >2D normal curve as a complex function, or am I hallucinating again?

0! is 1, because that’s how all other factorials are defined. For all positive integers n, n! = n(n-1)!

So for example, 7! is 7 * 6!
But then we need to know 6! which is 6 * 5!
But then we need to know 5! which is 5 * 4!, and so on, until…

But then we need to know 1! which is 1 * 0!
But then we need to know 0! which is 1, because it’s defined that way, because if it wasn’t defined that way, then no other factorials would make sense.
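That recursion translates directly into code; a minimal Python sketch:

```python
# Toy sketch: the recursion n! = n * (n-1)! only bottoms out if 0! has a
# value, and 0! = 1 is the choice that makes every other factorial work.

def factorial(n):
    if n == 0:
        return 1  # base case: 0! = 1 by definition
    return n * factorial(n - 1)

print(factorial(7))  # 5040
```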

Ummmm . . . stupid question here. How is that different than the regular definition of absolute value?

Not quite. The original definition, though not phrased recursively, was more along the lines of (n + 1)! = (n + 1)n! for all positive integers n, and there was no need for 0! to be defined. 0! = 1 came later after it was found to be very useful in things like the binomial theorem.

Most low-level books will probably give the definition you use, but I always prefer to write my recursions as above to avoid having to give so many restrictions.

Well, it’s not. The OP thought there was a difference between:

If x > 0, |x| = x
If x < 0, |x| = -x
If x = 0, |x| = 0

and

If x >= 0, |x| = x
If x < 0, |x| = -x

However, he was incorrect. If two functions have the same value at every point in their domain, they’re the same function.

Thus it should be no surprise that the derivative is the same. The OP only thought otherwise due to a misconception regarding how derivatives are calculated.

Well, no, more like I was playing with the definition for shits & grins, since the “official” way leads you to only two derivative values, whereas with the putzed-with case you can ask “hey, what if we set f’(0)=0? What rules would that violate?” (erm, even if I didn’t quite ask it that way). I blame it on my late-night sessions reviewing calculus so’s I don’t spend my first day in real analysis drooling on my desk.

Oh, and FTR I’m technically female, but will answer to either gender.

You can’t just “set” values for the derivative. It’s a specific function, completely determined by f and defined only where f is differentiable. You can construct a function g(x) like you want, but the best you can then say is that g(x) = f’(x) where both are defined.

Yes, and the value of f’(x) at a particular point (like at x=0) depends not only on what’s going on at that point but also on what’s going on right around that point (in a neighborhood or open interval containing that point), because the definition of the derivative involves a limit, and that limit has to be the same from the left and from the right. (This is directed to the OP and not Mathochist, who I’m sure is fully aware of it.)

Changing the formula you use to define a function (that is, using a different but equivalent definition that gives all the same values) doesn’t change the function. It’s still the same function with the same derivative (or lack thereof).

As for the 0! = 1 thing, that’s already been explained, but just to make it really really obvious, here’s one more explanation:

1! = 1 (no controversy there).
Multiply this by 2 and you get 2! (1! * 2 = 2!)
Multiply this by 3 and you get 3! (2! * 3 = 3!)
And so on. (You can keep this up as far as you want.)
Now, go in reverse:
Divide 3! by 3 and you get 2!
Divide 2! by 2 and you get 1!
Divide 1! by 1 and you get 0! To fit the pattern, 0! = 1.

A similar argument shows why a[sup]0[/sup] = 1, not 0, when a is a nonzero number.

Remember, factorials and exponents have to do with multiplication, and when it comes to multiplication, the “starting point” is 1, not 0.