Why is arctangent sometimes the only inverse trigonometric function available? I know you can manipulate arctan into the other inverse trig functions, but is there any specific reason why arctan is used over the others? Does it converge faster?
What is the fastest known formula that converges to pi (e.g. pi/4 = arctan(1/2) + arctan(1/3))?
It doesn’t converge faster, but in a sense it’s the fundamental inverse trig function: it falls out of a very simple form, the series expansion of INTEGRAL (1/(1+x^2)) dx.
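To illustrate: integrating the geometric series 1/(1+x^2) = 1 - x^2 + x^4 - ... term by term gives the Gregory series for arctan. A quick sketch (the function name is mine, not from any particular library):

```python
import math

def arctan_series(x, terms):
    """Partial sum of the Gregory series: arctan(x) = x - x^3/3 + x^5/5 - ..."""
    return sum((-1)**k * x**(2*k + 1) / (2*k + 1) for k in range(terms))

# Converges quickly for small |x|:
print(arctan_series(0.2, 10), math.atan(0.2))

# ...but painfully slowly at x = 1 (the Leibniz series for pi/4):
print(4 * arctan_series(1.0, 1000), math.pi)
```

This is also why the formulae below all use arctan of small arguments: the smaller x is, the faster the series dies off.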
I don’t know. However, some of the faster-converging ones that have been used historically for pi calculations include:
pi/4 = 4*atan(1/5) - atan(1/239)
pi/4 = atan(1/2) + atan(1/5) + atan(1/8)
pi/4 = 22*atan(1/28) + 2*atan(1/443) - 5*atan(1/1393) - 10*atan(1/11018)
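As a sketch of how these were used in practice, here is the first of those (Machin’s formula, pi/4 = 4*atan(1/5) - atan(1/239)) evaluated with the arctan series in arbitrary-precision decimal arithmetic; the helper name and digit counts are my own choices:

```python
from decimal import Decimal, getcontext

def atan_inv(n, terms):
    """Gregory series for atan(1/n), summed to the given number of terms."""
    x = Decimal(1) / n
    x2 = x * x
    total, term = Decimal(0), x
    for k in range(terms):
        total += term / (2*k + 1)
        term *= -x2          # next odd power, with alternating sign
    return total

getcontext().prec = 50
pi = 4 * (4 * atan_inv(5, 40) - atan_inv(239, 40))
print(pi)
```

Because the arguments 1/5 and 1/239 are small, forty terms of each series already give around fifty correct digits.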
I’ve never seen a situation where it was the only one available, but if it is, it’s probably because it’s a total function (i.e., arctan(x) is defined for every real number x).
pi = pi is pretty good, but not necessarily useful. Don’t know more off the top of my head.
Actually, reading the original question again, I assumed that agiantdwarf meant fastest-converging only in the context of arctan relations. However, if we’re talking fastest-converging of any form, it’d probably be formulae of the Chudnovsky / Borwein type. See the pi formulas page at MathWorld.
Those have linear convergence (i.e. a roughly fixed number of digits is added by each term — about 14 per term in the Chudnovsky case).
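For reference, a direct (unoptimised) sketch of the Chudnovsky series, 1/pi = 12 * sum over k of (-1)^k (6k)! (13591409 + 545140134k) / ((3k)! (k!)^3 640320^(3k+3/2)); real implementations use binary splitting rather than naive term-by-term summation like this:

```python
from decimal import Decimal, getcontext
from math import factorial

def chudnovsky_pi(terms, digits=50):
    """Naive term-by-term Chudnovsky summation; each term adds ~14 digits."""
    getcontext().prec = digits + 10   # a few guard digits
    total = Decimal(0)
    for k in range(terms):
        num = Decimal((-1)**k * factorial(6*k) * (13591409 + 545140134*k))
        den = Decimal(factorial(3*k)) * factorial(k)**3 * Decimal(640320)**(3*k)
        total += num / den
    # Pull out the constant 640320^(3/2) factor and invert:
    c = Decimal(640320)
    return (c * c.sqrt()) / (12 * total)

print(chudnovsky_pi(4))   # four terms already give ~56 digits
```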
Quadratically convergent (i.e. number of digits doubles at each iteration) algorithm here:
Another rapidly converging algorithm here:
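One well-known quadratically convergent scheme (which may or may not be the one linked above) is the Gauss-Legendre / Brent-Salamin iteration, where the number of correct digits roughly doubles with each pass:

```python
from decimal import Decimal, getcontext

def gauss_legendre_pi(iterations, digits=50):
    """Brent-Salamin AGM iteration; correct digits ~double per pass."""
    getcontext().prec = digits + 10
    a = Decimal(1)
    b = Decimal(1) / Decimal(2).sqrt()
    t = Decimal(1) / 4
    p = Decimal(1)
    for _ in range(iterations):
        a_next = (a + b) / 2
        b = (a * b).sqrt()
        t -= p * (a - a_next)**2
        a = a_next
        p *= 2
    return (a + b)**2 / (4 * t)

print(gauss_legendre_pi(5))   # five passes give well over 50 digits
```

The catch, as noted below, is that each pass needs a full-precision square root, so per-iteration cost is much higher than for a series term.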
That second algorithm is pretty cool. The Maclaurin series for atan(x) converges more rapidly as x decreases. The algorithm uses the half-angle relations:
tan(x/2) = sqrt((1 - cos(x))/(1 + cos(x)))
cos(x/2) = sqrt((1 + cos(x))/2)
to find a known value of tan(x), where x can be made arbitrarily small (halving at each step). It then needs only a few Maclaurin terms to recover x, giving pi (or rather, pi/n).
True. I must be more rigorous! The “fastest known formula” comes down to a mix of convergence rate and computational cost. While some formulae converge much faster than linearly, the computational overhead at each step means they’re not always efficient in practice.
For instance, my favourite for sheer simplicity is the Beeler iteration:
f_n = f_(n-1) + sin(f_(n-1))
Cubic convergence - but evaluating a sine to full precision at each step soon gets out of hand.
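The reason this works: pi is an attracting fixed point of f(x) = x + sin(x), and since f'(pi) = 1 + cos(pi) = 0 and f''(pi) = -sin(pi) = 0, the error roughly cubes at each step. A quick demonstration:

```python
import math

x = 3.0                     # any starting guess reasonably near pi
for _ in range(4):
    x = x + math.sin(x)     # Beeler iteration: error ~ e^3/6 per step
    print(x)

print(abs(x - math.pi))     # down to machine precision within a few steps
```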
On balance, for convergence plus computability, I think currently the Chudnovsky-style algorithms win out.