Math question on limiting behavior

I promise this is not homework, which ended for me 40+ years ago.

F(x) is a strictly increasing, strictly concave, continuous, unbounded function. Let’s make it twice differentiable if that helps.

Must F(x) grow like x[sup]a[/sup] in the limit? That is, is it true that there exists an a such that lim[sub]x->inf[/sub] F(x)x[sup]-a[/sup] = constant > 0?

If not, is it true that there exists an a such that lim sup[sub]x->inf[/sub] F(x)x[sup]-a[/sup] = constant1 > 0 and
lim inf[sub]x->inf[/sub] F(x)x[sup]-a[/sup] = constant2 > 0?

I don’t see how to get the oscillation needed for the second without violating concavity, but my intuition has led me wrong on these too often.

x is real and F(x) takes on real values only if that matters.

Try F(x)=x^x

If I understand the question right, no. You don’t even need unusual functions, either. Good old e^x eventually grows faster than any x^a.

Terminology question: Do you mean concave upwards (i.e. f’’ > 0) or concave downwards (i.e. f’’ < 0)? I’ve seen unmodified “concave” mean either of these.

The answer to your questions is “no” and “no” either way, but the counterexample is different.

The standard terminology among those who are picky is that a function f is convex if f(tx + (1 - t)y) <= tf(x) + (1 - t)f(y) for all t in [0, 1] (with strict inequality for strict convexity), and concave if -f is convex. Convexity is a big deal in optimization theory, so this is fast becoming standard terminology elsewhere too.

Then would x^x and exp(x) not match the OP’s criterion, since they are not strictly concave? Something like ln(x) would fit the bill then.

Yes, I meant concave the way optimizers mean it: the chord connecting any two points lies below the function, i.e. f(tx + (1-t)y) > tf(x) + (1-t)f(y) for t in (0, 1) and x ≠ y. (And if it matters, the function is to be positive only.) I’d like to see the counterexample to that one, please. Remember f(x) is strictly increasing, strictly concave as just defined, unbounded above, and twice differentiable.

Yeah, ln x would be a counterexample, and I meant to exclude it as sort of equivalent to a = 0 in my conjecture, since it fits naturally there in the sequence of integrals of x[sup]b[/sup] as b -> -1.

Does ln(x) (for x > 1) fit the bill? Its second derivative is -1/x^2, which is always negative on that domain, but ln(x)/x^a for any positive a has limit zero as x -> inf.
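For what it’s worth, a quick numerical check of that limit (plain Python, sample points my own):

```python
import math

# ln(x) is strictly increasing and strictly concave for x > 1,
# yet ln(x)/x^a -> 0 as x -> inf for every fixed a > 0.
# The ratio actually peaks near x = e^(1/a) before decaying to zero.
a = 0.1                     # even a tiny positive exponent wins eventually
for x in (1e2, 1e6, 1e12, 1e24, 1e48):
    print(f"x = {x:.0e}  ln(x)/x^a = {math.log(x) / x**a:.6f}")
```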

ETA: Never mind then.

What about sqrt(ln(x))? Does that avoid your “fit naturally” objection?

The only strictly positive concave functions are constant. This follows pretty quickly from the fact that a concave function is bounded above by each of its tangent lines: a tangent line with nonzero slope eventually goes negative in one direction, so the function would have to as well.

Anyway, something that grows more slowly than the natural logarithm will qualify. If you restrict the gamma function to the ray [2, ∞), where it is increasing, and take the inverse of that, it should work.
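If it helps, here is a rough numerical sketch of that inverse (my own bisection on the increasing branch via math.lgamma; the bracket [2, 500] is an arbitrary choice that covers these inputs):

```python
import math

def inv_gamma(y):
    """Invert the gamma function on its increasing branch [2, inf)
    by bisection, working with log-gamma to avoid overflow."""
    target = math.log(y)
    lo, hi = 2.0, 500.0          # lgamma(500) ~ 2600, plenty for y <= 1e200
    for _ in range(200):
        mid = (lo + hi) / 2
        if math.lgamma(mid) < target:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

# The ratio inv_gamma(y)/ln(y) keeps shrinking, i.e. the inverse gamma
# function grows more slowly than the logarithm.
for y in (1e10, 1e50, 1e100, 1e200):
    print(f"y = {y:.0e}  inv_gamma(y)/ln(y) = {inv_gamma(y) / math.log(y):.3f}")
```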

Yes but that’s only if you insist on looking over the entire real line.

Yes, thank you. Late today I emailed a math student and he pointed out that
x[sup]c[/sup](log x)[sup]b[/sup] would grow faster than x[sup]c[/sup] and slower than x[sup]c+epsilon[/sup] for any b > 0 and any epsilon > 0, and would fit my increasing concave criteria for all large enough x if 0 <= c < 1.
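That observation is easy to check numerically; a small sketch with hypothetical values c = 0.5, b = 2, epsilon = 0.1:

```python
import math

# h(x) = x^c (log x)^b eventually outgrows x^c (the first ratio blows up)
# but is outgrown by x^(c+eps) (the second ratio dies off).
c, b, eps = 0.5, 2.0, 0.1

def h(x):
    return x**c * math.log(x)**b

for x in (1e3, 1e30, 1e300):
    print(f"x = {x:.0e}  h/x^c = {h(x) / x**c:.3g}  "
          f"h/x^(c+eps) = {h(x) / x**(c + eps):.3g}")
```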

Bah! If log x counts as x^(-1 + 1) = x^0, then x^c (log x)^b should count as x^c (x^0)^b = x^(c + 0 * b) = x^c.

Related question.

Is there any enumeration of how strictly increasing, strictly concave functions can approach infinity? We obviously have x[sup]a[/sup] and x[sup]a[/sup](log x)[sup]b[/sup] for 0 < a < 1. Is that exhaustive? That is, is it true for all continuous, twice (or more times if needed) differentiable, strictly increasing, strictly concave, unbounded functions that there are some a and b such that lim f(x)/[x[sup]a[/sup](log x)[sup]b[/sup]] = c > 0 or at least lim sup f(x)/[x[sup]a[/sup](log x)[sup]b[/sup]] = c > 0?

Or is the set of limiting behaviors too big to try to even classify?

log(log(x))?

Yeah, you can always make one that grows more slowly. The inverse gamma function I mentioned above definitely falls outside that range.

Consider two functions f and g which are like so: at the beginning, f is twice as big as g, but g is growing faster than f. They continue at basically this same growth rate until g becomes twice as big as f. At this point, g decreases its growth rate to be less than f’s, and they continue at (basically) this pace until f is twice as big as g. At this point, f decreases its growth rate to be less than g’s, and they continue at this pace until g is twice as big as f… Ad infinitum.

These are strictly increasing, concave functions, and can just as well be arranged to be infinitely differentiable (just swing some bump functions round the transitions as necessary) and unbounded (just never drop the growth rate below some particular positive value). But, of course, they have no limiting ratio; their ratio oscillates between 2 : 1 and 1 : 2.

Accordingly, limiting ratios won’t pick one or the other out as larger/faster-growing in the long run. And thus there can be no hope of assigning both of them a place within some hierarchy which is totally ordered by such limiting ratios (such as x[sup]a[/sup] or x[sup]a[/sup](log x)[sup]b[/sup] or other such things)…

This doesn’t quite address the lim sup question, but I think deals a fair amount of damage to the plain limit one.
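For concreteness, here is one way to realize that construction with piecewise-linear pieces (the slope choices 3·4[sup]-k[/sup] and 4[sup]-k[/sup] are my own; any fast slope more than twice the slow one, with each function’s own slope sequence non-increasing, would do):

```python
# Piecewise-linear version of the f/g construction above.  In phase k the
# "fast" function has slope 3*4**-k and the "slow" one 4**-k, and the phase
# ends exactly when the fast function becomes twice the slow one, so the
# ratio f/g bounces between 1:2 and 2:1 forever.
f, g = 1.0, 2.0                  # start with g twice f; f goes fast first
f_slopes, g_slopes = [], []
ratios = [f / g]

for k in range(1, 7):
    fast, slow = 3 * 4.0**-k, 4.0**-k
    if k % 2 == 1:               # odd phases: f is the fast one
        L = (2 * g - f) / (fast - 2 * slow)   # phase length until f = 2g
        f, g = f + fast * L, g + slow * L
        f_slopes.append(fast); g_slopes.append(slow)
    else:                        # even phases: g is the fast one
        L = (2 * f - g) / (fast - 2 * slow)   # phase length until g = 2f
        g, f = g + fast * L, f + slow * L
        g_slopes.append(fast); f_slopes.append(slow)
    ratios.append(f / g)

# each function is concave: its own slope sequence never increases
assert all(s1 >= s2 for s1, s2 in zip(f_slopes, f_slopes[1:]))
assert all(s1 >= s2 for s1, s2 in zip(g_slopes, g_slopes[1:]))
print(ratios)                    # 0.5, 2.0, 0.5, 2.0, ... -- no limit
```

Both functions are unbounded, since the values grow geometrically from phase to phase (by a factor of 25 every two phases with these slopes), and smoothing the corners with bump functions, as you say, gives the infinitely differentiable version.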