The Slashdot poll today was on whether people think P==NP, and someone rightly brought up that, even if P==NP, it’s quite possible that the poly-time algorithm for NP-complete problems will be slow enough that it won’t matter in practice. If we discover an O(n[sup]100[/sup]) algorithm that solves NP-complete problems, it won’t make much of a difference as far as practical computation goes.
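To put a rough number on that (my own back-of-the-envelope sketch, not something from the poll): an O(2[sup]n[/sup]) brute-force search doesn’t even overtake an O(n[sup]100[/sup]) algorithm until n is nearly a thousand, and by that point either one needs far more operations than there are atoms in the observable universe.

[code]
# Find where 2**n finally overtakes n**100 (exact integer arithmetic).
n = 2
while 2**n < n**100:
    n += 1
print(n)  # 997
[/code]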
But that made me wonder: What is the highest-order polynomial algorithm that is the most efficient known solution to an interesting problem? In school, I only remember learning about algorithms that were n[sup]3[/sup] at worst, and the vast majority of the algorithms we learned were n[sup]2[/sup] or better. I would define interesting in this case to mean having a practical application, not just mathematical interest. I realize that “purely” mathematical problems often do end up having practical applications, but I’m hoping to avoid the trivial answer of “standard algorithm applied in a k-dimensional space,” which (may) have an O(n[sup]k[/sup]) running time.
It depends on the problem. n[sup]2[/sup] is the runtime of an inefficient sorting algorithm, but n[sup]2.81[/sup] (Strassen’s exponent, log[sub]2[/sub] 7) is the runtime of an efficient matrix multiplication algorithm.
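For the curious, here’s a bare-bones Python sketch of Strassen’s idea (my own illustration; it assumes square matrices whose side is a power of 2). The whole trick is doing seven recursive multiplications where the obvious blocked recursion does eight, and log[sub]2[/sub] 7 ≈ 2.81 is exactly where the exponent comes from:

[code]
import numpy as np

def strassen(A, B):
    # Multiply two n x n matrices (n a power of 2) using Strassen's seven
    # recursive products instead of the naive eight: O(n^log2(7)) ~ O(n^2.81).
    n = A.shape[0]
    if n == 1:
        return A * B
    m = n // 2
    A11, A12, A21, A22 = A[:m, :m], A[:m, m:], A[m:, :m], A[m:, m:]
    B11, B12, B21, B22 = B[:m, :m], B[:m, m:], B[m:, :m], B[m:, m:]
    M1 = strassen(A11 + A22, B11 + B22)
    M2 = strassen(A21 + A22, B11)
    M3 = strassen(A11, B12 - B22)
    M4 = strassen(A22, B21 - B11)
    M5 = strassen(A11 + A12, B22)
    M6 = strassen(A21 - A11, B11 + B12)
    M7 = strassen(A12 - A22, B21 + B22)
    C11 = M1 + M4 - M5 + M7
    C12 = M3 + M5
    C21 = M2 + M4
    C22 = M1 - M2 + M3 + M6
    return np.vstack([np.hstack([C11, C12]), np.hstack([C21, C22])])

A = np.random.randint(0, 10, (8, 8))
B = np.random.randint(0, 10, (8, 8))
assert np.array_equal(strassen(A, B), A @ B)  # matches the naive product
[/code]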
Actually, n[sup]2[/sup] is the runtime for the kinds of sorting algorithms that most people can come up with. It takes a real computer scientist to invent a really inefficient sorting algorithm:
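One famous example is bogosort: shuffle the list until it happens to come out sorted, for an expected running time on the order of n·n! comparisons. A minimal sketch:

[code]
import random

def is_sorted(a):
    return all(a[i] <= a[i + 1] for i in range(len(a) - 1))

def bogosort(a):
    # Shuffle until sorted: expected O(n * n!) running time.
    while not is_sorted(a):
        random.shuffle(a)
    return a

print(bogosort([3, 1, 2]))  # [1, 2, 3]
[/code]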
Many NP-complete problems are solvable in poly time when a key parameter is fixed. But that parameter then appears in the exponent of the running time. Take determining whether a graph can be embedded on a surface of a given fixed genus g (how many “doughnut holes” it has). There’s a classic linear time algorithm for the plane. Keep increasing g and the exponent in the running time keeps going up. The running time of the Filotti, Miller, and Reif algorithm is O(v[sup]O(g)[/sup]) for a graph with v nodes.
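To see how a fixed parameter ends up in the exponent, here’s a toy sketch of my own (deliberately naive, and nothing to do with genus): deciding whether a graph has a vertex cover of size k by trying every size-k subset takes roughly O(n[sup]k[/sup]) subset checks. (Vertex cover actually has much cleverer algorithms that pull k out of the exponent; the brute-force version just shows the pattern.)

[code]
from itertools import combinations

def has_vertex_cover(edges, vertices, k):
    # Try all C(n, k) subsets of size k: roughly O(n^k * |edges|) time.
    # The fixed parameter k lands in the exponent, like the genus g above.
    for subset in combinations(vertices, k):
        cover = set(subset)
        if all(u in cover or v in cover for (u, v) in edges):
            return True
    return False

# A 4-cycle has a vertex cover of size 2 (two opposite corners).
edges = [(0, 1), (1, 2), (2, 3), (3, 0)]
print(has_vertex_cover(edges, range(4), 2))  # True
[/code]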
There are also parameterized versions of many NP-hard optimization problems. E.g., if you want to get within X% of optimum for bin packing, the running time will have an exponent that is inversely related to X. I.e., the better the answer you want, the more running time is needed. (Hardly a shock.)
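To make that concrete (a sketch of my own, using Sahni’s classic knapsack scheme rather than bin packing, since it fits in a few lines): enumerate every “starter” set of at most k items and finish greedily by value density. The error guarantee shrinks as k grows, and k sits right in the exponent of the roughly O(n[sup]k+1[/sup]) running time.

[code]
from itertools import combinations

def knapsack_ptas(items, capacity, k):
    # items is a list of (weight, value) pairs.  Try every starter set of
    # at most k items, then fill the rest greedily by value density.
    # Running time is roughly O(n^(k+1)); bigger k means better answers.
    n = len(items)
    order = sorted(range(n), key=lambda i: items[i][1] / items[i][0],
                   reverse=True)
    best = 0
    for size in range(k + 1):
        for starter in combinations(range(n), size):
            weight = sum(items[i][0] for i in starter)
            if weight > capacity:
                continue
            value = sum(items[i][1] for i in starter)
            for i in order:
                if i not in starter and weight + items[i][0] <= capacity:
                    weight += items[i][0]
                    value += items[i][1]
            best = max(best, value)
    return best

items = [(3, 5), (4, 6), (5, 7), (2, 3)]  # (weight, value)
print(knapsack_ptas(items, capacity=7, k=2))  # 11
[/code]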
(Many, many NP-complete problems cannot be solved like this, though. E.g., coloring a graph with a given number of colors, or finding a longest non-self-intersecting path.)
In short, there is no “practical” limit on the exponent for poly time algorithms.
As to specific non-NP-related problems, there have been some fairly large exponents for problems like max flow that eventually got worked down to something more manageable.
Note that all this applies to algorithms and not to problems. There are virtually no tight non-trivial lower bounds for problems.
Go get some recent STOC and FOCS proceedings and I’m sure you’ll see some new algorithms with nice-sized exponents.
This seems counterintuitive to me… I would expect that for any number of nodes v, there is some g such that a surface of genus g can accommodate all graphs with v nodes, and I would expect the bounds for the required g to be calculable. In fact, I can trivially embed a complete graph on v vertices on a surface of genus g = v*(v-1), using one handle per edge with room to spare. So for any g > v*(v-1), the answer “yes” can be given almost instantly.
That’s consistent with the O notation given. Remember that saying that f is O(g) just means that there are constants N and c such that f(n) < cg(n) when n > N. As long as k > 0, 1 is O(n[sup]k[/sup]).
Chronos: Note that I said “of a given fixed genus g”. g is an input to the problem.
E.g., you can always color a graph with v nodes with v colors. The NP-complete question is “Can you color this graph with this number of colors?” Two inputs: graph and number of colors.
To Chronos’s point, for sufficiently high values of the input g, the answer can be given much more quickly than the asymptotic running time indicates. But since O allows for very loose upper bounds, it’s not a problem.
I think I see my mistake… O only cares about the limit for very large problems. And for v larger than sqrt(g), the trivialization I noted doesn’t apply, and it’s the large v that are important (what v constitutes “large” depends, of course, on the given value of g). So even though, for large g, there’s an awful lot of easy values of v, the difficulty could sharply increase past some threshold, thus making the problem, for large g, asymptotically “hard”.
I misspoke: Genus g is not an actual input. It is a specification of the problem. So a program to solve it would have g hardcoded in it. Only the graph is the input. But a program with a small g will run faster than a program with a large g.