We had an example problem in my Graphics course today, just the standard “rotate something around an arbitrary line through (0,0,0) and (a,b,0)” problem. The insight is that there’s some angle beta such that rotating the line by -beta puts it on the x-axis, and then you can do the rotation about the x-axis. Either way, that’s not the point; the idea is that beta = tan[sup]-1[/sup](b/a), or the equivalent arcsine/arccosine expression. Then you can build a rotation matrix whose entries are cos(-beta), sin(-beta), -sin(-beta), and cos(-beta). He asserted that since you already know a and b, you can make the following substitution instead of taking the time to compute beta:
cos(beta) = a/sqrt(a[sup]2[/sup] + b[sup]2[/sup])
sin(beta) = b/sqrt(a[sup]2[/sup] + b[sup]2[/sup])
My question is this: from previous threads here and from searching elsewhere, I gather the trig functions and their inverses generally use a lookup table. So one division, one arctan lookup, and four trig lookups (computing each matrix entry naively) would likely be cheaper than four divisions, four square roots, eight squarings, and four additions (again computing each entry naively).
I asked, and he said his research and degree are in computational geometry, which intersects with graphics, but since he doesn’t focus on graphics he isn’t really knowledgeable about the implementation details of graphics processing (so we focus more on theory), and the suggestion just comes from a book he read. Which is fine, but I want to know now :p.
I understand it probably depends on the language and compiler in question (whether trig functions are done with Taylor-series expansions or lookup tables), but in general, or if you want to get specific, let’s say gcc with the standard-issue Ubuntu Linux <math.h> functions, which would likely be cheaper?