In the current course of my research, I’m dealing with a particularly ugly function (ugly enough that I’m not going to post it here). To make it a little friendlier to deal with, I’m actually working with a power-series expansion of it. I’ve taken the expansion out to second order, which seems to be adequate for my purposes. However, I’ve noticed that the first-order results seem to be better than the second-order. One of my committee members said that he thinks this is not uncommon for some power series; some series, he said, will only improve in accuracy at the odd-order terms, not the even ones. In other words, if I were to take my expansion out to third order, I would expect my results to be better than the first order, but if I went to fourth order, the results would again get worse (but presumably still better than the second-order).
So, on to the questions. First of all, is this kind of behaviour known in power series expansions? Second, and most importantly, if it is known, what’s it called (if I know this, I can search Google or Mathworld for more information)? Third, how would one recognize such a function, short of knowing the exact result and the entire series?
My first instinct (full disclosure: I’m not an analyst) is to ask if your ugly function is close to being even. Odd order terms are odd functions, which “pull the ends” in different directions.
On a more general note, if your function is ugly enough, how are you justifying its expansion in a Taylor series? You do know that almost all functions (even almost all infinitely differentiable functions) aren’t analytic, right? Consider e[sup]-1/x[sup]2[/sup][/sup], for instance (suitably patched for continuity at the origin, natch).
It’s got both even and odd parts, and how much of each depends on a few adjustable parameters. But I’m not sure that should cause any problem: Even in the case of a completely odd function, you would still be able to do a Taylor series, and you’d just get 0 as the coefficient of each even term.
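To make the even/odd decomposition concrete: any function splits into an even part (f(x) + f(−x))/2 and an odd part (f(x) − f(−x))/2, and a Taylor series about 0 picks up even-order coefficients only from the former and odd-order coefficients only from the latter. Here's a minimal sketch; the exp placeholder is just an illustration, not the actual function under discussion:

```python
import math

def f(x):
    return math.exp(x)  # hypothetical stand-in for the "ugly" function

def f_even(x):
    # Even part of f; for f = exp this is cosh(x)
    return (f(x) + f(-x)) / 2

def f_odd(x):
    # Odd part of f; for f = exp this is sinh(x)
    return (f(x) - f(-x)) / 2

x = 0.7
assert abs(f_even(x) + f_odd(x) - f(x)) < 1e-12  # parts recombine to f
assert abs(f_even(x) - math.cosh(x)) < 1e-12
assert abs(f_odd(x) - math.sinh(x)) < 1e-12
```

For a completely odd function, f_even is identically zero, which is exactly why all the even-order Taylor coefficients vanish.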
And while most functions are not analytic, most functions encountered in physics are (well, except for the Heaviside function and the Dirac “function”, but those are both really easy to work with, and you’d never need to Taylor expand them). But in any event, I justify my Taylor expansion by the fact that it works: Within the region of interest, it gives results within a fraction of a percent of the true value (I tested it at a few isolated points), even with the second-order terms. I’m just wondering how to make it better.
Strictly speaking, by the way, any function which is infinitely differentiable at a point can be Taylor expanded about that point, but if it’s not analytic at that point, like e[sup]-1/x[sup]2[/sup][/sup] at x = 0 (one of my favorite examples), the radius of convergence might be zero.
No, the radius of convergence of that Taylor series is infinity, since all the derivatives at zero are zero. The Taylor series just doesn’t agree with the function on any open neighborhood of zero, or in other words: the two functions have different germs at zero.
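A quick numerical illustration of that point, assuming the standard patching of the function to 0 at the origin: every derivative of e[sup]-1/x[sup]2[/sup][/sup] at 0 vanishes, so its Taylor series about 0 is identically zero. That series trivially converges everywhere, but it only equals the function at x = 0 itself:

```python
import math

def bump(x):
    # e^(-1/x^2), patched for continuity at the origin
    return math.exp(-1.0 / x**2) if x != 0 else 0.0

# The entire Taylor series about 0 sums to 0, so the "error" of the
# full series is just |bump(x)| -- small near 0, but never zero there.
for x in (0.5, 0.2, 0.1):
    print(x, bump(x))  # nonzero, while the Taylor series gives 0
```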
Good luck in finding a more on-point answer. I still think you should post the function itself, if only to scare the math-phobic.
I can see this happening for sufficiently large values of x-x[sub]0[/sub] (the series expansion variable), but I think that if the radius of convergence is nonzero then for x sufficiently close to x[sub]0[/sub] each additional term should improve the approximation.
As an example, consider the Taylor series
(1/2) - x + (3/4)x[sup]2[/sup] - (1/2) x[sup]3[/sup] + (3/8) x[sup]4[/sup] + …
i.e., with even-order terms (3/2[sup]n+1[/sup]) x[sup]2n[/sup] (n>0) and odd-order terms -(1/2[sup]n-1[/sup]) x[sup]2n-1[/sup]; this is, FWIW, the Taylor series, about 0, for (1-x)[sup]2[/sup]/(2-x[sup]2[/sup]). This series has been tailored (yuk yuk) so that at x=1 its partial sums are 1/2, -1/2, 1/4, -1/4, 1/8, -1/8, …, clearly converging to 0, but with odd-order partial sums having exactly the same error as the preceding even-order partial sum.
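The partial-sum pattern described above is easy to verify numerically. A sketch using the coefficients as given (constant term 1/2, even-order terms 3/2[sup]n+1[/sup], odd-order terms -1/2[sup]n-1[/sup]):

```python
def coeff(k):
    # Coefficients of 1/2 - x + (3/4)x^2 - (1/2)x^3 + (3/8)x^4 + ...
    if k == 0:
        return 0.5
    n = (k + 1) // 2  # k = 2n (even) or k = 2n-1 (odd)
    return 3.0 / 2 ** (n + 1) if k % 2 == 0 else -1.0 / 2 ** (n - 1)

def partial_sums(x, order):
    s, out = 0.0, []
    for k in range(order + 1):
        s += coeff(k) * x ** k
        out.append(s)
    return out

# At x = 1 the series sums to 0, so each partial sum IS its own error.
print(partial_sums(1.0, 5))  # -> [0.5, -0.5, 0.25, -0.25, 0.125, -0.125]
```

Each odd-order partial sum has exactly the same magnitude of error as the even-order one before it, just as described.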
But when x gets sufficiently small, the ratio of successive terms in the series, having a factor of x, will get arbitrarily small, so the series will be dominated by its lowest-order terms and the convergence has to get strictly better with increasing order.
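And to check that claim on the same example series: at a small value like x = 0.1, the error of the partial sums against the exact value (1-x)[sup]2[/sup]/(2-x[sup]2[/sup]) does shrink strictly with every added term, odd or even:

```python
def coeff(k):
    # Same series as above: 1/2 - x + (3/4)x^2 - (1/2)x^3 + ...
    if k == 0:
        return 0.5
    n = (k + 1) // 2
    return 3.0 / 2 ** (n + 1) if k % 2 == 0 else -1.0 / 2 ** (n - 1)

x = 0.1
true_value = (1 - x) ** 2 / (2 - x ** 2)  # what the series sums to

s, errors = 0.0, []
for k in range(6):
    s += coeff(k) * x ** k
    errors.append(abs(s - true_value))

# For x this small, every additional term strictly reduces the error.
assert all(e2 < e1 for e1, e2 in zip(errors, errors[1:]))
print(errors)
```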