Is there a limit to how computationally expensive a function can be?

Let’s get really simple. Just boolean formulas: And, Or, Not. In conjunctive normal form if you want to keep the formulas simple.

Can’t get much simpler than that. No loops, no big time arithmetic or anything. n inputs, one output.

Given a formula written as a single-line program, how hard is it to see whether there is a simpler program that computes the same formula? That’s NP-hard.

Note that the current known bounds blow up very quickly. Even a small n like 100 will take effectively forever using the best known methods.
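To make the blow-up concrete, here is a minimal brute-force sketch. The encoding (each clause as a list of signed variable indices) is my own choice for illustration, not anything from a real SAT toolkit. Even just *verifying* that two formulas agree costs 2^n evaluations this way, which is fine for n = 2 and hopeless for n = 100.

```python
from itertools import product

def eval_cnf(cnf, assignment):
    # A literal k refers to variable |k|-1; a negative k means it is negated.
    return all(
        any(assignment[abs(lit) - 1] == (lit > 0) for lit in clause)
        for clause in cnf
    )

def equivalent(cnf_a, cnf_b, n):
    # Brute force: compare the two formulas on all 2**n assignments.
    return all(
        eval_cnf(cnf_a, bits) == eval_cnf(cnf_b, bits)
        for bits in product([False, True], repeat=n)
    )

# (x1 OR x2) AND (NOT x1 OR x2) simplifies to just x2:
f = [[1, 2], [-1, 2]]
g = [[2]]
print(equivalent(f, g, 2))  # True
```

Finding the *smallest* equivalent formula is harder still: you would have to search over candidate formulas and run a check like this for each one.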

Is this simple enough?

>I’d be very surprised if division were implemented with Newton-Raphson in the Pentium FPU…

“Good ole’ Cray-1 used an iterative process for floating point division
which worked like this: given a floating point number x, use the first 8
bits of the mantissa to index into a lookup table containing initial
guesses, then do a few steps of Newton-Raphson iteration involving only
multiply-add operations to get the fully converged reciprocal mantissa,
fix the exponent, thus obtaining 1/x, then multiply y*(1/x) to get y/x.
As I recall, the famous Pentium FDIV bug involved some corner cases in a
similar iterative process, all of which is internal to the floating
point unit.”
from http://www.beowulf.org/archive/2004-December/011434.html
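As a rough sketch of the iteration the quote describes: seed a reciprocal guess (here a simple formula stands in for the real 8-bit mantissa lookup table), refine it with the Newton-Raphson step r ← r·(2 − x·r) using only multiplies and adds, fix the exponent, then compute y/x as y·(1/x). This is an illustration, not the actual Cray or Pentium microcode.

```python
import math

def initial_guess(m):
    # Stand-in for the 8-bit lookup table: assumes the mantissa m has been
    # scaled into [1, 2), where 1/m lies in (0.5, 1].
    return 2.0 / (1.0 + m)

def recip(x, iterations=5):
    # Assumes x > 0. Split x = m * 2**e with m in [0.5, 1), then rescale
    # the mantissa into [1, 2).
    m, e = math.frexp(x)
    m *= 2.0
    e -= 1
    r = initial_guess(m)
    for _ in range(iterations):
        # Newton-Raphson for 1/m: the relative error squares each step,
        # so a few iterations converge fully from a coarse seed.
        r = r * (2.0 - m * r)
    return math.ldexp(r, -e)  # fix the exponent: 1/x = (1/m) * 2**-e

def divide(y, x):
    return y * recip(x)

print(divide(355.0, 113.0))  # ~3.1415929...
```

The quadratic convergence is why a tiny lookup table suffices, and also why a wrong table entry (as in the FDIV bug) only shows up on the specific operands that hit the bad entries.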