High-school calculus question - do I have enough information to figure out the function?

On the off chance I’ll understand the answer: how could a function with properties I described in the OP not be differentiable at the local extremes? For that matter, could a function that is defined at a given coordinate be undifferentiable at that point?

The classic example of a non-differentiable function defined on all the reals is the absolute value function, e.g. y = |x|. What is the derivative when x=0? It’s undefined.

There are lots of other such functions. The least-integer (ceiling) function is another, which opens up the whole class of step functions: functions with discontinuous y values that are still defined over all x.

To be sure, if by “function” you really mean just simple polynomials of the form ax[sup]n[/sup] + bx[sup]n-1[/sup] + … + zx[sup]0[/sup], then it gets harder to find non-differentiable functions, at least for low-order derivatives.

And for a (non-differentiable) function where the local extrema aren’t points of changing sign of derivative, you could consider a sawtooth function, like f(x) = x - floor(x). Everywhere that the derivative exists, it’s equal to +1, but there’s a local maximum (and a local minimum) at every integer (where the derivative is undefined).
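A quick numeric sketch (Python; my own illustration, not from the thread) of the sawtooth’s behavior near an integer and its slope elsewhere:

```python
import math

def sawtooth(x):
    """f(x) = x - floor(x): the fractional part of x."""
    return x - math.floor(x)

# Just below an integer the value is near 1; at and just above it, near 0.
print(sawtooth(2 - 1e-9))   # close to 1
print(sawtooth(2))          # exactly 0
print(sawtooth(2 + 1e-9))   # close to 0

# Away from the integers the difference quotient confirms the slope is +1.
h = 1e-6
slope = (sawtooth(0.5 + h) - sawtooth(0.5)) / h
print(slope)  # close to 1
```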

EDIT:
Oh, and if you want to get really weird, it’s possible to have a function f(x) that’s continuous at every irrational value of x, but discontinuous at every rational number. And one could probably construct a similarly-pathological function for differentiability instead of continuity, but that stretches my brain a bit too far.

Well, impossible. Polynomials are always infinitely differentiable.

Local minimum, sure. But why do you say local maximum? At an integer, x - floor(x) = 0; nudging x a little higher would cause this to get a little higher, and nudging x a little lower would cause this to get a lot higher (the function being discontinuous with respect to limits of inputs increasing towards an integer).

The textbook definition of the derivative of f at x, as the limit of (f(a) - f(x))/(a - x) as a approaches x, allows for a number of needless pathologies of little interest outside the deliberate study of pathologies. I often prefer to think in terms of the “strong” derivative given by the limit of (f(a) - f(b))/(a - b) as both a and b approach x. [If this “strong” derivative exists, then so does the textbook one, and they coincide; however, in some cases, the textbook derivative exists but only by virtue of ignoring various paths of approach which cause the “strong” derivative to fail to exist]. In particular, the “strong” derivative is never discontinuous; if it is defined throughout an interval, it is also continuous throughout that interval.
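To make the distinction concrete, here is a sketch in Python using the standard example f(x) = x² sin(1/x) (my choice, not from the thread): the textbook derivative at 0 exists and equals 0, but the “strong” difference quotient can be driven toward -1 by letting both points approach 0 together through a region where f′ is close to -1.

```python
import math

def f(x):
    # x^2 sin(1/x), extended by f(0) = 0: the textbook derivative at 0
    # exists (it is 0), but the "strong" derivative at 0 does not.
    return x * x * math.sin(1.0 / x) if x != 0 else 0.0

# Textbook quotient at 0: (f(h) - f(0)) / h = h sin(1/h) -> 0.
h = 1e-6
textbook = (f(h) - f(0)) / h
print(textbook)  # tiny

# "Strong" quotient: let a and b both approach 0 along points where
# f'(x) = 2x sin(1/x) - cos(1/x) is close to -1, i.e. x near 1/(2*pi*n).
b = 1.0 / (2 * math.pi * 1000)
a = b * (1 + 1e-6)
strong = (f(a) - f(b)) / (a - b)
print(strong)  # close to -1, not 0
```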

Yeah, I was worried that I might trip over a few definitions, there. Would you prefer if I said “local supremum”, instead?

On the standard account of things, a function can certainly be undifferentiable at a point where it is defined. There are various ways this might happen.

One is because the derivative actually blows up to positive infinity or negative infinity, and people like to think of these as not proper numbers. (Thus, people say, for example, that the cube root function is not differentiable at 0, even though there is a clear sense in which its derivative there is a well-defined positive infinity). That will not actually present any problem for the reasoning of this thread.
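A quick numeric check (Python; illustrative only) of the cube root’s blow-up: the difference quotient at 0 is h^(1/3)/h = h^(-2/3), which grows without bound as h shrinks.

```python
# Difference quotient of the cube root at 0, for shrinking positive h.
# It equals h**(-2/3), so it grows without bound as h -> 0 from the right.
quotients = [(h ** (1 / 3)) / h for h in (1e-2, 1e-4, 1e-6)]
print(quotients)  # roughly 21.5, 464, 10000
```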

Another way is because the rates of change of the function in different directions don’t match up. Thus, for example, the absolute value function at 0 has a “rightwards slope” of 1, but a “leftwards slope” of -1. There is a well-defined rate of change in any particular direction, but they don’t all happen to match. In terms of this, for a function to have a local minimum(/maximum) at a point is to say that the function is increasing(/decreasing) as one moves away in any particular direction.
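A numeric sketch (Python; my own illustration) of those mismatched one-sided slopes for |x| at 0, compared with x², where the two sides agree:

```python
# One-sided difference quotients for |x| at 0: the slopes disagree.
h = 1e-6
right = (abs(0 + h) - abs(0)) / h      # approaching from the right -> 1
left = (abs(0 - h) - abs(0)) / (-h)    # approaching from the left -> -1
print(right, left)  # 1.0 -1.0

# For y = x**2 the two one-sided slopes agree (both tend to 0).
sq = lambda t: t * t
right_sq = (sq(h) - sq(0)) / h
left_sq = (sq(-h) - sq(0)) / (-h)
print(right_sq, left_sq)  # both tiny
```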

Finally, the possibility exists that the limit which would ordinarily define the derivative fails to settle down to any particular value, oscillating too wildly.

As you are trying at this point just to learn calculus and not, say, the deliberate study of pathologies that falls under the name “analysis”, there is not much point in your worrying about this sort of thing. It is a distraction you are unlikely to find much use for or interest in (though, of course, you are always free to choose your own interests…). You can certainly choose to work within a framework in which the only functions you consider are the ones which are as smooth as you like (with rare occasion to consider functions separately smooth on separate parts of their domain, like the absolute value function).

Er, no; I still don’t understand what you mean. x - floor(x) is 0 at integers, while just above integers, it is just above 0, and just below integers, it is just below 1 (and thus much above 0). Its value at an integer is not the supremum of its values in the immediate vicinity of that integer; every other value in that vicinity is, in fact, higher. So why would you call it a “local supremum”?

ETA: Ah, I think perhaps you are thinking of the function as taking on every value in [0, 1] simultaneously at integers? So that it’s not actually the function x |-> x - floor(x) on the standard definition of floor, but rather, the closely related multi-valued function whose graph really is a continuous sawtooth? In that case, all is understood; the highest value reached is 1, which happens at integers, and the lowest value reached is 0, which also happens at integers.

I mean that, as you move along the function from 0 to 1, you’ll be going up a hill, and that, while you’ll never actually reach a height of 1, you’ll get arbitrarily close to that height. So the function has a supremum of 1. Where does it reach that supremum? Well, never, of course… but the closer you get to an argument of 1, the closer the value of the function gets to 1. So (depending on definitions) the function has no local maximum, but to the extent that it has any point that resembles a local maximum, that point is an integer. In fact, one (at least, one who isn’t an hour past one’s bedtime) could surely construct an alternate definition of “local maximum”, which would agree with the standard definition for any continuous function, and under which definition the integers would be local maxima of the sawtooth function I described.

Addendum: It might clarify to realize that I’m approaching this from the perspective of a physicist, and that physicists never actually deal with numbers, but only with neighborhoods of numbers. Thus, for all practical purposes, floor(x) = ceil(x) - 1 , since they agree everywhere except on a set of measure zero.

And if you force a physicist at gunpoint to assign a value to a function at the location of a jump discontinuity, the physicist is likely to put the value at the midpoint of the jump, under which convention a sawtooth function has neither true maxima nor true minima.

Ah, ok. This seems about on track with saying the function should be thought of as multi-valued, taking both values 0 and 1 at integers as those are both limiting values of the function at integers.

Thank you all for the ongoing answers. I’m glad I didn’t miss something about the functions I’m working with, but look forward to learning about the more exotic ones you’re talking about!

I’ve been thinking about y=∣x∣ and why it is different from y=x^2. (I’m not arguing it isn’t, I’m just trying to think of why it is.)

The slope at any point other than (0,0) of y=∣x∣ is 1 or -1. Then things go to hell at (0,0). I’m trying to figure out why that is different from the point (0,0) on y=x^2, where the slope is defined as 0. In many superficial ways, that point has similar properties in the two functions: a horizontal line is tangent at the same point, in both cases that point represents a vertex or at least a similar change from a negative slope to a positive slope, and both functions have similar contours - though one has a v shape whereas the other has a u shape. I can, after all, draw the same tangent at that point in both functions, and they certainly look similar. (Whereas, with an asymptote on, oh, 1/x, you can see that the function is not defined right there.)

Yes, I see that with y=x^2, the same math applies to the derivative at (0,0) as to any other point, whereas that is not the case with y=∣x∣. But the fact remains, I can draw the same horizontal line at the base of y=∣x∣ as I can at y=x^2. It seems odd to me that I can graph something that looks like it should be defined, but isn’t. I guess that with math, you can’t have “i before e except after c” rules - so you can’t just declare that the derivative of y=∣x∣ is 1 or -1 except at (0,0), where it is 0.

When you say a horizontal line is “tangent” to the graph y = |x| at (0, 0), in what sense are you using the word “tangent”?

Note, for example, that lots of different lines through (0, 0) are “tangent” to the graph of y = |x| in the sense of not crossing through the graph. For example, y = x/2, or y = -x/2; in general, y = kx for any k between -1 and 1.
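A quick sanity check (Python; my own illustration) that every such line stays on or below the graph, hence never crosses it:

```python
# Every line y = k*x with -1 <= k <= 1 stays on or below y = |x|,
# so none of them crosses the graph. Check on a grid of sample points.
xs = [i / 10 for i in range(-50, 51)]
ok = all(k / 10 * x <= abs(x) for k in range(-10, 11) for x in xs)
print(ok)  # True
```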

Gotcha! (Sorry - Gotcha ya!). Hadn’t thought of that. It is undefined because there’s an infinite number of them rather than none of them? So far in the math I’ve done, undefined has always meant something can’t happen rather than something can happen too often.

That depends on how you’re defining “tangent line”. Under some definitions, it’ll have an infinite number of them, while under others, it’ll have none at all. But there are very few definitions under which it would have exactly one.

Given three points you should be able to construct one polynomial where these are the two maxima and the minimum.

As others have pointed out, there are flaws: you also want assurance that this is a smooth, continuous function (another detail which can be considered data). Is it definitely an x^4 polynomial? That’s another detail.

I don’t know enough trig etc. to say whether it could be a non-polynomial, but certainly a sine wave overlaid on a sine wave (or other function) could be manipulated to produce those same max and min (and additional ones) and so can higher-order polynomials.

The derivative function has three zeroes (three x values where it equals 0), giving you a function in x^3 that (as the spoiler points out) can be turned into a solution in x^4; the sign of the multiplying constant determines which are the maxima and which the minimum; but presumably there are higher-order functions that can also give these results.

D18, which calculus text are you using?

LSLGuy, WTF? You don’t see anybody else in this thread flying heavy metal, do you?

Chronos, that makes sense, thank you. As I learned in your post on sums in the sum of all natural numbers = -1/12 thread, math sometimes has a surprising number of Clintonesque moments. It depends on what the meaning of the word “is” is, indeed! :wink:

Leo, I’m just using some PDFs of a high school course I got passed along to me. And dude, it may be a woosh, but why are you calling out LSLGuy? His post was really helpful!

My guess at what the problem is getting at FWIW
Let’s assume the quartic is ax^4 + bx^3 + cx^2 + dx + e; the derivative is the cubic 4ax^3 + 3bx^2 + 2cx + d.
Find the coefficients so that the roots of the cubic match the x’s where the derivative is zero, which is pretty easy. If the x’s are i, j, k then the cubic is n(x^3 - (i+j+k)x^2 + (ij+ik+jk)x - ijk) [I hope I did that right], where n is any nonzero real number.
This means in the original quartic set a = n/4; b = -n(i+j+k)/3; c = n(ij+ik+jk)/2; d = -n(ijk)

But from there, how do you pick n and e to ensure the quartic goes through the given points?

It doesn’t matter what n and e are (as long as n isn’t zero). The problem gave me the three x values of the local extrema, but n and e only affect the y values, so they can be set at arbitrary values. Being lazy, I selected n = 1 and e = 0.
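That recipe checks out numerically; here is a quick Python sketch, with the lazy choice n = 1 and e = 0 and made-up extrema at x = -1, 0, 2 (the actual x values from the problem aren’t in the thread):

```python
# Extrema locations: made-up example values, not the ones from the problem.
i, j, k = -1.0, 0.0, 2.0
n, e = 1.0, 0.0  # n: any nonzero scale; e: vertical shift

# Coefficients from the recipe above.
a = n / 4
b = -n * (i + j + k) / 3
c = n * (i * j + i * k + j * k) / 2
d = -n * (i * j * k)

def deriv(x):
    # Derivative of the quartic ax^4 + bx^3 + cx^2 + dx + e.
    return 4 * a * x**3 + 3 * b * x**2 + 2 * c * x + d

# The derivative should vanish (up to rounding) at each chosen extremum.
print([deriv(r) for r in (i, j, k)])  # all essentially 0
```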

I used your formula to work out the question, and I got the correct graph, but reflected in the y axis, so one of us flipped the signs. It’s probably safe to say I did! :wink: