“One of the properties of a minimum (of a function) is that if we go away from the minimum in the first order, the deviation of the function from its minimum value is only second order. At any place else on the curve, if we move a small distance the value of the function changes to the first order.”

What does he mean by first order and second order?

For instance if f(x) = x[sup]2[/sup] the minimum is at x = 0. How would first order/second order apply to this?

In this case, first order means proportional to x, second order means proportional to x[sup]2[/sup]. In general, it’s whether the change in the function is linear (first order) or quadratic (second order) in distance from the point.
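You can see the distinction numerically. Here's a quick Python sketch (my own illustration, not from the quote): step a small distance eps away from the minimum of f(x) = x[sup]2[/sup] at x = 0, and then away from an ordinary point like x = 1, and compare how much f changes.

```python
# f(x) = x**2 has its minimum at x = 0.
def f(x):
    return x ** 2

eps = 0.001

# Stepping away from the minimum: the change is eps**2 (second order).
change_at_min = f(0 + eps) - f(0)

# Stepping away from x = 1: the change is 2*eps + eps**2,
# dominated by the first-order term 2*eps.
change_elsewhere = f(1 + eps) - f(1)

print(change_at_min)    # ~ eps**2, i.e. about 1e-06
print(change_elsewhere) # ~ 2*eps,  i.e. about 0.002
```

So near the minimum a step of size 0.001 changes f by only about 0.000001, while at x = 1 the same step changes f by about 0.002 -- two thousand times more.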

In addition, try explaining a model to non-math people that goes beyond second order or even, heaven forbid, doesn’t use polynomials.

Their heads explode.

I even had one executive force me to use a linear model even though it got worse and worse the further you went from the defining point. I guess it was his money…

Feynman is right, but only for analytic functions. And if more than one derivative vanishes, the order goes up: if the first and second derivatives both vanish, then the deviation is third order.

Suppose the function is given by a power series (that is what analytic means, BTW) at a point x_0, say f(x) = \sum_{i=0}^\infty a_i(x - x_0)^i. Then a_1 = f’(x_0), and that is 0 if and only if the function grows away from a_0 at a rate proportional to (x - x_0)^2 (plus higher powers of x - x_0). The second derivative at x_0 is 2a_2, and this vanishes as well if and only if the function grows away from a_0 at a rate proportional to (x - x_0)^3 plus higher powers. And so on. That is what Feynman meant.
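A numeric sketch of that point (my own illustration, not from the thread): take f(x) = (x - x_0)^3 + 5, so that f’(x_0) = 0 and f’’(x_0) = 0 but a_3 = 1. The deviation f(x_0 + eps) - f(x_0) should then shrink like eps^3, i.e. it is third order.

```python
x0 = 2.0

def f(x):
    # f'(x0) = 0 and f''(x0) = 0, but the cubic coefficient a_3 = 1.
    return (x - x0) ** 3 + 5.0

ratios = []
for eps in (0.1, 0.01, 0.001):
    deviation = f(x0 + eps) - f(x0)
    ratios.append(deviation / eps ** 3)  # should hover near a_3 = 1

print(ratios)
```

The ratio deviation/eps[sup]3[/sup] stays pinned near 1 as eps shrinks, confirming the deviation is third order.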

After some web searching I found something that I might understand. If you expand f(x) in a Taylor series about a point x0 where f has a minimum (so the slope is 0), then a small step epsilon away from x0 shouldn’t change f to first order. It won’t start having an effect until you get to the 1/2 f’’(x0) eps[sup]2[/sup] term, i.e. second order. Does this make any sense?

Take, as an example, f(x) = x[sup]2[/sup] - 2x.
Then f’(x) = 2x - 2 and f’’(x) = 2.
Thus, f has a minimum at x = 1. Replace x with 1+[symbol]e[/symbol], so we are considering points near the minimum. Then
f(1+[symbol]e[/symbol]) = (1+[symbol]e[/symbol])[sup]2[/sup] - 2(1+[symbol]e[/symbol])
= (1+2[symbol]e[/symbol]+[symbol]e[/symbol][sup]2[/sup]) - 2(1+[symbol]e[/symbol])
= -1 + [symbol]e[/symbol][sup]2[/sup]

The difference between f(1) and f(1+[symbol]e[/symbol]) is second order in [symbol]e[/symbol], i.e. it depends on [symbol]e[/symbol][sup]2[/sup] and higher powers of [symbol]e[/symbol]. As an exercise, try the same procedure but work with 2 and 2+[symbol]e[/symbol]. You should find that the difference includes terms in [symbol]e[/symbol] this time. This is the difference between first and second order.
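Here's the same check done numerically in Python (my own sketch of the exercise above): for f(x) = x[sup]2[/sup] - 2x, the difference f(1 + eps) - f(1) should be exactly eps[sup]2[/sup] (second order), while f(2 + eps) - f(2) picks up the first-order term 2*eps.

```python
def f(x):
    return x ** 2 - 2 * x

eps = 0.01

# Near the minimum at x = 1: difference is eps**2 = 0.0001.
near_min = f(1 + eps) - f(1)

# Near the ordinary point x = 2: difference is 2*eps + eps**2 = 0.0201,
# dominated by the first-order term.
near_two = f(2 + eps) - f(2)

print(near_min, near_two)
```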

Well, I don’t know how to get displays like Jabba’s. I used the near-universal markup language that all mathematicians use, called TeX. \sum is a summation sign, _ marks subscripts, ^ marks superscripts, \infty is infinity, and the rest ought to be clear.

I should have added that the function x^{3/2} (the 3/2 power of x) has a minimum at x = 0, but the deviation there is only of order 3/2. It does not have a power series expansion at x = 0 (although it does at every point x > 0).
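A numeric sketch of that remark (my own check): for f(x) = x^{3/2}, the deviation from the minimum at x = 0 vanishes faster than first order but slower than second order, so its order really is 3/2 and Feynman’s second-order statement doesn’t apply.

```python
def f(x):
    return x ** 1.5  # real-valued only for x >= 0

steps = (0.1, 0.01, 0.001)

# Ratio to eps tends to 0: there is no first-order term.
first_order_ratios = [f(eps) / eps for eps in steps]

# Ratio to eps**2 blows up: the deviation is not second order either.
second_order_ratios = [f(eps) / eps ** 2 for eps in steps]

print(first_order_ratios)
print(second_order_ratios)
```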

The other side of the coin is when I (as an engineer) worked for a theoretical physicist. He wanted to use all sorts of exotic functions when a linear model worked just fine (r[sup]2[/sup] = .9995).

Oh to meet people like that…sigh. Hasn’t happened yet.

The only time I’ve met someone who does this is when I hire someone with little or no experience. We try to use a linear model whenever possible, switching to non-linear only when it’s much better. Sometimes not even then.