For what it’s worth, I was assuming the question was “How are the lookup tables constructed in the first place?”. (Of course, one theoretical way of constructing them is to say “Can this be the base 2 log of 7? No, raising 2 to it shows it’s too low. Can this be the base 2 log of 7? No, raising 2 to it shows it’s too high”, where the process of raising 2 to a rational power, in turn, involves finding Nth roots of 2 by “Can this be the Nth root of 2? No, taking its Nth power shows it’s too low. Can this be the Nth root of 2? No, taking its Nth power shows it’s too high”. But I assume the question is for a method of computing the lookup tables that is not quite so gorgeously inefficient.)
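If anyone actually wants to see that guess-and-check spelled out, here is a rough Python sketch of the nested bisections, using nothing but multiplication and comparison. The bracketing interval, bit counts, and iteration counts are just choices I made for illustration:

```python
def power(r, n):
    """r ** n, but treat float overflow as 'way too high' so the bisection still works."""
    try:
        return r ** n
    except OverflowError:
        return float("inf")

def nth_root(x, n, iters=80):
    """Find the positive Nth root of x by bisection: is the candidate's Nth power
    too low or too high?"""
    lo, hi = 0.0, max(x, 1.0)
    for _ in range(iters):
        mid = (lo + hi) / 2
        if power(mid, n) < x:
            lo = mid        # mid**n too low -> the root is larger
        else:
            hi = mid        # mid**n too high -> the root is smaller
    return (lo + hi) / 2

def pow_rational(base, p, q):
    """base ** (p/q), built only from Nth roots and integer powers."""
    return nth_root(base, q) ** p

def log_by_bisection(base, x, lo, hi, bits=20):
    """Bisect on the exponent: is the candidate p / 2**bits the log of x?
    Raise 'base' to it and see whether the result is too low or too high."""
    q = 1 << bits
    p_lo, p_hi = lo * q, hi * q
    while p_hi - p_lo > 1:
        p_mid = (p_lo + p_hi) // 2
        if pow_rational(base, p_mid, q) < x:
            p_lo = p_mid    # base**candidate too low -> the log is larger
        else:
            p_hi = p_mid    # base**candidate too high -> the log is smaller
    return p_lo / q

print(log_by_bisection(2, 7, 2, 3))   # about 2.807355, i.e. the base 2 log of 7
```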
That’s what I thought it was too, and I was incredibly impressed with your answer (mainly because I don’t understand it).
I guess you’d call that unit a “sageratian”, then?
I’d just think of it as using units of a full revolution (perhaps the most natural unit for specifying angles…), but encoded via a fixed point datatype with 9 bits of precision past the binary point.
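Concretely, something like this (a Python sketch of the encoding; pairing it with a 512-entry sine table is just my own illustration of how it would get used):

```python
import math

FRAC_BITS = 9
STEPS = 1 << FRAC_BITS                 # 512 distinct angles per full revolution

# One table entry per representable angle; index i means i/512 of a revolution.
sine_table = [math.sin(2 * math.pi * i / STEPS) for i in range(STEPS)]

def to_fixed(revolutions):
    """Encode an angle given in revolutions as a 9-bit fixed-point fraction of a turn."""
    return round(revolutions * STEPS) & (STEPS - 1)   # wraps past a full turn

def sin_fixed(angle):
    """Sine of a fixed-point angle, straight out of the table."""
    return sine_table[angle & (STEPS - 1)]

print(sin_fixed(to_fixed(0.25)))       # a quarter turn -> 1.0
```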
I could try to explain it better, if you’d like. What parts do you think I should clarify?
As for computing them more efficiently, somewhere upstream somebody mentioned Chebyshev. More specifically, you will find a topic called something like “economization of power series” studied in numerical analysis courses, which makes use of the Chebyshev polynomials.
One of the problems with the usual power series representations of a function is the distribution of error over the range of x values you will be using. A truncated Taylor series expansion of a function f(x) is exactly correct at x = a, the point you expanded around. As you move away from that point the error grows. To get the desired accuracy over your whole range of x values, you may have to use a large number of terms to accommodate the x values furthest from a. The economization techniques let you jigger the coefficients of the polynomial obtained from the truncated Taylor series in a way that distributes the error more evenly over the interval. That gives acceptable accuracy with a smaller number of terms.
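If anyone wants to see the effect, here is a rough sketch using NumPy’s Chebyshev helpers. The function (sin x), the interval [-1, 1], and the degrees are just choices I made for the example:

```python
import numpy as np
from numpy.polynomial import chebyshev as C
from numpy.polynomial import polynomial as P
from math import factorial

# Degree-9 Taylor polynomial of sin(x) about 0, coefficients in increasing order.
taylor9 = np.zeros(10)
for k in (1, 3, 5, 7, 9):
    taylor9[k] = (-1) ** ((k - 1) // 2) / factorial(k)

# Economization: convert to the Chebyshev basis, drop the highest-order terms,
# convert back.  The dropped terms are small and bounded by 1 on [-1, 1], so the
# worst-case error barely grows even though the degree falls to 7.
cheb = C.poly2cheb(taylor9)
economized7 = C.cheb2poly(cheb[:8])    # keep T_0 .. T_7 only

x = np.linspace(-1.0, 1.0, 2001)
err = lambda c: np.max(np.abs(P.polyval(x, c) - np.sin(x)))

print("degree-9 Taylor:     ", err(taylor9))        # ~2.5e-8
print("degree-7 Taylor:     ", err(taylor9[:8]))    # ~2.7e-6
print("economized degree-7: ", err(economized7))    # close to the degree-9 error
```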
With large memory spaces, fast processors, and the ability to copy a lookup table from somewhere else, economization of series expansions isn’t as important as it once was. At one time it was very important.
Thank you. I would have to spend some time trying to figure it out first. I’m getting an idea of how it operates, but I need to go back and look at the definitions of these functions in the first place. I’d like to take a rain check on the offer though.
On the other hand, one of my colleagues was recently working on a project which required greater than double precision, and found the existing math libraries for such lacking, and so had to write his own trig functions. IIRC, he ended up using Chebyshev polynomials.
The basic formulas are:
ln(z) = (z-1) - (z-1)^2/2 + (z-1)^3/3 - … alternating plus and minus, with the power and the divisor increasing by 1 each time. (This one only converges for 0 < z ≤ 2.)
Similarly:
sin(x) = x - x^3/3! + x^5/5! - x^7/7! etc.
cos(x) = 1 - x^2/2! + x^4/4! - x^6/6! etc.
If you are dividing by factorials, and x (radians) is small, then these series will converge pretty fast. The question is whether it is easier and more accurate than a lookup - maybe lookups could take too much memory in the good Olde Days, but today memory is cheap. OTOH, since the 80486DX, Intel chips have had floating point hardware built in to handle this sort of calculation (before that, the x87 was a separate coprocessor). However, if you start to get into extended precision, beyond 8 digits, calculation is probably still your best bet.
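For what it’s worth, here’s a quick Python sketch of those series in action; the argument values and term counts are just for illustration:

```python
import math

def sin_series(x, terms=10):
    """sin(x) = x - x^3/3! + x^5/5! - x^7/7! + ..."""
    total, term = 0.0, x
    for k in range(terms):
        total += term
        # next term = previous term * -x^2 / ((2k+2)(2k+3)); the factorials build up as you go
        term *= -x * x / ((2 * k + 2) * (2 * k + 3))
    return total

def ln_series(z, terms=60):
    """ln(z) = (z-1) - (z-1)^2/2 + (z-1)^3/3 - ...  (only converges for 0 < z <= 2)"""
    u = z - 1.0
    return sum((-1) ** (n + 1) * u ** n / n for n in range(1, terms + 1))

print(sin_series(0.5), math.sin(0.5))   # matches to full double precision after a handful of terms
print(ln_series(1.5), math.log(1.5))    # converges, but far more slowly than the sine series
```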
Small typo: It is constructed to have derivative i times itself and have initial value 1.
The ttmath library can handle at least 2048-bit floats. And it has the trig functions built in.
I wonder what was insufficient about it.
A long time ago, I had to look into whether we could speed up calculations of Bessel functions. We were using the Numerical Recipes routines, which use rational function approximations - ratios of polynomials, P(x)/Q(x). For the Bessel functions, the argument range was 0 to 8, with a different approximation used for larger arguments.
Those were significantly faster than Chebyshev polynomial approximations for the same accuracy. The Numerical Recipes functions were only single precision, but if you went to their source, an ancient tome called Computer Approximations, they had coefficients for lots of rational function approximations with P(x) and Q(x) of different orders, and the accuracy you get from each of them. IIRC, some of them had accuracies in the low 20s of digits. That same source had approximations for other functions, probably including Sine and Cosine, but of course I never bothered with those.
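In case anyone wants to see the shape of it, here’s a small Python sketch of evaluating a rational approximation with two Horner passes. I’ve used the well-known [2/2] Padé approximant of e^x as stand-in coefficients, purely for illustration; the actual Bessel coefficients from Numerical Recipes / Computer Approximations would slot into P and Q the same way:

```python
import math

def horner(coeffs, x):
    """Evaluate a polynomial, highest-order coefficient first."""
    result = 0.0
    for c in coeffs:
        result = result * x + c
    return result

# e^x ~= (12 + 6x + x^2) / (12 - 6x + x^2) near x = 0 (the [2/2] Pade approximant)
P = [1.0, 6.0, 12.0]     # x^2 + 6x + 12
Q = [1.0, -6.0, 12.0]    # x^2 - 6x + 12

def exp_rational(x):
    return horner(P, x) / horner(Q, x)

for x in (0.1, 0.5, 1.0):
    print(x, exp_rational(x), math.exp(x))
```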