Well, this method gives you the square root of two, if x = 1. But you’re right, for arbitrary square roots you need a more general construction.
For example: suppose AB is a line segment of length n. Extend AB to a line segment AC with length n+1. Construct a circle with diameter AC, and let P be one of the points where the perpendicular to AC at B intersects the circle. Then the length of PB will be exactly the square root of n.
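To see why this works (a standard justification, not spelled out in the post above): since AC is a diameter, the angle APC is a right angle by Thales' theorem, and the altitude PB of the right triangle APC is the geometric mean of the two segments it cuts off the hypotenuse:

```latex
% Thales: P on a circle with diameter AC gives \angle APC = 90^\circ.
% Triangles ABP and PBC are then similar, so PB/AB = BC/PB, hence
\[
  PB^2 = AB \cdot BC = n \cdot 1 = n
  \qquad\Longrightarrow\qquad
  PB = \sqrt{n}.
\]
```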
I guess rolleyes were on sale and you got quite a few you can give away. Still, you have not answered my question of how to program an algorithm that is simpler and more efficient than the one I proposed. Please post one that is simpler and will converge in fewer steps. Thank you.
Another point I do not quite understand. “Guessing a number M” or doing M = (1 + N/2) or whatever method you use to get a first M serves to give you the first pair to work with, the pair being M and N/M. The next step is to make M = (M + N/M)/2 and reiterate. If you “right shift the exponent field” as you propose, you are effectively dividing by 2, so that your first M is N/2 and your next step is to make M = (M + M/2)/2, which always equals 1.5M = 3N, which can be got without so much hassle. Your next step will yield M = (N/2 + 3*N)/2, etc. Unless I am missing something, this does not seem to converge. Can I get an explanation of how this works? (Preferably without a rolleyes.)
I have implemented both algorithms, starting out with M = N/2 and M = (1 + N/2), and they both converge to six figures in about the same number of steps, although sometimes M = (1 + N/2) will do it in one or two fewer, but I guess that is not a huge difference. I had always used M = (1 + N/2) because that is what I was taught somewhere and never really thought about it. I still don’t get the rolleyes, though. Maybe someone else can explain it to me?
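For anyone who wants to reproduce the comparison, here is a minimal C sketch; the function name, the test value, and the six-figure convergence test are my own choices, not anything from this thread:

```c
#include <stdio.h>
#include <math.h>

/* Count Newton iterations M <- (M + N/M)/2 until M*M agrees with N
 * to about six significant figures. */
static int newton_sqrt_steps(double n, double m)
{
    int steps = 0;
    while (fabs(m * m - n) > 1e-6 * n) {  /* ~six-figure accuracy */
        m = (m + n / m) / 2.0;
        ++steps;
    }
    return steps;
}

int main(void)
{
    double n = 1234.5678;  /* arbitrary test value */
    printf("M = N/2     : %d steps\n", newton_sqrt_steps(n, n / 2.0));
    printf("M = 1 + N/2 : %d steps\n", newton_sqrt_steps(n, 1.0 + n / 2.0));
    return 0;
}
```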
S’okay. If you have the ability to just right shift the exponent (of, say, an IEEE 754 single precision float) then I don’t think you’d bother with Newton’s method, anyway.
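For illustration, here is the sort of thing that remark alludes to; this is my own sketch of the classic bit trick, not code from the thread. Right-shifting the whole bit pattern halves the biased exponent, and adding half the bias back re-centers it, giving a rough square root in two integer operations:

```c
#include <stdio.h>
#include <stdint.h>
#include <string.h>
#include <math.h>

/* Crude sqrt approximation for a positive IEEE 754 single precision
 * float: halve the exponent by shifting the whole word, then restore
 * half of the 127 bias (0x1FC00000 is 63.5 << 23). Accurate to within
 * a few percent; this is the naive constant, not a tuned magic number. */
static float shift_sqrtf(float x)
{
    uint32_t i;
    memcpy(&i, &x, sizeof i);
    i = (i >> 1) + 0x1FC00000u;
    memcpy(&x, &i, sizeof x);
    return x;
}

int main(void)
{
    for (float x = 0.5f; x <= 16.0f; x *= 2.0f)
        printf("x = %5.2f  approx = %8.5f  true = %8.5f\n",
               x, shift_sqrtf(x), sqrtf(x));
    return 0;
}
```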
Engywook :smack: Sorry, my brain is going, as others have noticed. I was groping for the construction described by Orbifold, which is Proposition 13 from Book VI of Euclid’s Elements.
I suspect you may be thinking of this method. However, it predates Newton and is, I suspect, also ancient. As may be the equivalent method for cube roots, which Fibonacci published in 1202. There’s also a generalisation to arbitrary roots given by Stifel in 1544 - see A.W.F. Edwards, Pascal’s Arithmetical Triangle (1987; Johns Hopkins, 2002), pp. 5-7.
Aside from Newton-Raphson, the other method Newton invented for extracting roots was the binomial expansion, which can express roots as the sum of an infinite series. You approximate by truncating the series at some finite number of terms.
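For concreteness (my illustration, not from the post above), the binomial series for a square root is:

```latex
\[
  \sqrt{1+x} = (1+x)^{1/2}
             = 1 + \tfrac{1}{2}x - \tfrac{1}{8}x^2 + \tfrac{1}{16}x^3 - \cdots,
  \qquad |x| < 1.
\]
% e.g. with x = 0.25 the first three terms give 1.1172,
% against \sqrt{1.25} \approx 1.1180.
```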
This page has more information on classical methods for the extraction of roots.
More on sources here at Big Square Roots; it mentions evidence that the ‘long division’ method goes back at least to the 13th-century Arab mathematician Ibn al-Banna.
This problem has to be addressed whenever you’re porting a compiler to a machine that doesn’t have hardware floating point support.
And Newton’s method is a very easy way to do it.
The implementation depends on what floating point representation you’re using, of course.
The basic steps are:

- Separate the mantissa and exponent.
- The mantissa will be in a narrow range, e.g. 0.5 to 1, so use a linear interpolation to guess the mantissa of the root, e.g. root mantissa = 0.42 + 0.59 * mantissa.
- Right shift the exponent, taking note of whether it was odd or even.
- Reassemble the root’s mantissa and exponent, and if the exponent was odd, multiply the result by 1.4142.
- You now have a root accurate to about 3 decimal places. Iterate through Newton’s method 3 times, and that’s it (sketched in code below).
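Pulling those steps together, here is a minimal C sketch for single precision; frexpf/ldexpf stand in for the bit-field surgery you would do on a real soft-float target, and everything apart from the 0.42/0.59 interpolation and the three Newton iterations is my own framing. Assumes x > 0:

```c
#include <stdio.h>
#include <math.h>

/* Square root via the steps above, for x > 0. */
static float soft_sqrtf(float x)
{
    /* 1. Separate mantissa and exponent: x = m * 2^e, m in [0.5, 1). */
    int e;
    float m = frexpf(x, &e);

    /* 2. Linear interpolation for the root's mantissa. */
    float r = 0.42f + 0.59f * m;

    /* 3. Halve the exponent, noting whether it was odd. */
    int odd = e & 1;            /* low bit; fine on two's complement */
    e = (e - odd) / 2;          /* floor(e / 2) */

    /* 4. Reassemble; an odd exponent leaves a spare factor of 2,
     *    so multiply by sqrt(2). */
    r = ldexpf(r, e);
    if (odd)
        r *= 1.4142136f;

    /* 5. The guess is good to ~3 decimal places; three Newton
     *    iterations polish it past single precision. */
    for (int i = 0; i < 3; ++i)
        r = 0.5f * (r + x / r);

    return r;
}

int main(void)
{
    const float tests[] = { 0.03f, 0.5f, 2.0f, 42.0f, 1e6f };
    for (int i = 0; i < 5; ++i)
        printf("soft_sqrtf(%g) = %.7g  (libm: %.7g)\n",
               tests[i], soft_sqrtf(tests[i]), sqrtf(tests[i]));
    return 0;
}
```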