Was Newton the first (only?) to develop a method for finding square roots? What about cube roots and other roots? For those unfamiliar with the technique for finding square roots by hand, it resembles a long division problem, but the technique is not quite identical, of course. - Jinx
I’m sure this isn’t what you’re talking about, but you can use Newton’s Method to find the root of any old equation, including x[sup]2[/sup] - 5 = 0. I understand one of the ancient mathematical cultures, like the Babylonians, used what was effectively Newton’s Method for finding square roots.
And the Ancient Greeks used various geometrical methods to find square roots. For instance, if you have a line, you can easily construct the square of which that line is the diagonal.
Do you mean the Newton-Raphson iteration method?
Paradigm, Jinx means this process:
To find a = √b
a) Guess a.
b) Set a = (a + b/a)/2
c) Repeat step (b) until solved.
“Solved” usually means that the last iteration didn’t change the value of a at the precision being used.
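In Python, the loop might look something like this (just a rough sketch of the steps above; the starting value and the cutoff are my own choices):

[code]
def heron_sqrt(b, a=None):
    """Square root of b by repeated averaging: a -> (a + b/a) / 2."""
    if a is None:
        a = 1.0 + b / 2.0                  # any positive starting guess will do
    for _ in range(100):                   # far more iterations than ever needed
        new_a = (a + b / a) / 2.0
        if abs(new_a - a) <= 1e-15 * a:    # stopped changing at this precision
            return new_a
        a = new_a
    return a

print(heron_sqrt(5))   # 2.2360679..., i.e. the root of x**2 - 5 = 0
[/code]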
Those are the steps involved in using NR to solve the problem. But that particular procedure for that particular problem predated calculus by millenia.
Yep, that’s what I thought (he never mentioned Raphson, but I’ve heard the NR method called just Newton’s method, so I was making sure). In that case the root you find isn’t the exact root, just an accurate estimate (how accurate depends on how many iterations you run). So it’s not exactly ‘solved’ mathematically, but for most practical purposes it is.
Oh, also it doesn’t always work…
Try doing it with y = x³ + 9 - 9x², and take your first guess to be either 1 or 4.7034.
I’ll admit it, I intentionally made that one so it would fail, but I’m sure there are other examples that fail without being rigged. It works fine if you just take a different initial starting point.
Just for information, NR works by taking a point on the curve and following the tangent (the gradient) there down to the x-axis, then repeating, so you have to be careful not to miss any roots.
Btw Desmostylus, your method only works for x^1/2 because you’ve effectively fiddled it.
The real NR method is:
For y=f(x)
Guess a
Set a = a - f(a)/f’(a)
Repeat
Yours works because it’s exactly this with f(x) = x² - b: since d/dx(x²) = 2x, the step a - (a² - b)/(2a) simplifies to (a + b/a)/2.
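In code, the general recipe might look something like this (a sketch only; f and its derivative are whatever equation you’re trying to solve, supplied by the caller):

[code]
def newton_raphson(f, f_prime, a, iterations=50, tol=1e-12):
    """General Newton-Raphson: repeatedly replace a with a - f(a)/f'(a)."""
    for _ in range(iterations):
        step = f(a) / f_prime(a)   # blows up if f'(a) = 0, one way it can fail
        a -= step
        if abs(step) < tol:        # close enough to call it "solved"
            break
    return a

# The square root of 5, found as the root of f(x) = x^2 - 5:
print(newton_raphson(lambda x: x**2 - 5, lambda x: 2*x, a=2.0))
[/code]

The division by f’(a) is also where the failure cases mentioned above come from: a guess near a point where the derivative vanishes sends the next estimate flying off.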
Paradigm, I explained a simple point to you, i.e.:
“Do you mean the Newton-Raphson iteration method?”
You are a new member. Welcome aboard. You shouldn’t assume from the outset that you know more than any other given member. This sort of stuff doesn’t fly here.
Sorry man, I didn’t mean any disrespect
All I meant was that it was different from the version I was familiar with, and that your method only works for x^(1/2) because it’s a manipulation of that method (which came first, I honestly don’t know).
All I meant by ‘fiddle’ is a manipulation of something that makes it work in specific circumstances.
I don’t assume (to assume makes an ass out of u and me ;)) that I’m smarter than anyone; intelligence is an abstract concept, of course. Plus I’m only 18, and I’m sure there are many more experienced, wiser, and simply more intelligent people out there.
Maybe I should put a disclaimer at the bottom of my posts: ‘This post only opinion of poster, may not represent real fact’
Also, thx for the welcome, I am indeed new…
Guys, I’m not certain Jinx means what you think he means:
I’ve seen a method for computing square roots by hand which resembles long division, in that the digits of the square root are determined one at a time. It’s similar but not quite the same as Newton’s method of applying a->(a+b/a)/2 repeatedly. For one thing, the “long-division-like” method approaches the square root exclusively from below (just as long division approaches the quotient from below), whereas Newton’s method approaches the square root from both sides.
Unfortunately I don’t know the origin of the “long-division-like” method, but I wouldn’t be surprised to learn that it predates Newton. (I’ve never heard it referred to as “Newton’s method”, either.)
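For the curious, the digit-at-a-time idea goes roughly like this in code (my own sketch of the pencil-and-paper procedure, restricted to whole-number square roots to keep it short):

[code]
def digit_by_digit_sqrt(n):
    """Integer square root of n, one decimal digit at a time,
    the way the 'long-division-like' pencil-and-paper method works."""
    s = str(n)
    if len(s) % 2:                 # pad so the digits split into pairs
        s = "0" + s
    pairs = [int(s[i:i + 2]) for i in range(0, len(s), 2)]

    root = 0                       # digits of the answer found so far
    remainder = 0                  # what's left after the subtractions so far
    for pair in pairs:
        remainder = remainder * 100 + pair
        # Largest digit d such that (20*root + d)*d still fits in the remainder:
        d = 0
        while (20 * root + d + 1) * (d + 1) <= remainder:
            d += 1
        remainder -= (20 * root + d) * d
        root = root * 10 + d
    return root

print(digit_by_digit_sqrt(152399025))   # 12345
print(digit_by_digit_sqrt(18))          # 4 (the answer never overshoots)
[/code]

Each pass through the loop produces one more decimal digit, and the running result never exceeds the true root, which is the “from below” behaviour described above.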
I do not think he does. Finding the square root by iterative successive approximations is not the same as the method which “resembles a long division problem”. They are two different methods.
BTW, in my implementation of the recursive algorithm I never used “guess a number” to begin with. I used (1 + N/2) or some such as the starting value.
I’ve a feeling I’ll be smacking my head in a mere moment, but how does this method give you the square root of anything?
Let y be length of diagonal.
Let x be length of side of square.
2x[sup]2[/sup] = y[sup]2[/sup]
From this, I don’t see a way to get the square root of either x or y to come out as anything but something times the square root of the other.
And what exactly would you call your starting point? Estimate? Starting Point, maybe. Guesstimate? :rolleyes:
The square root method resembling long division can be found here.
Newton Cracks Roots, and I Don’t Care,
Newton Cracks Roots, and I Don’t Care,
Newton Cracks Roots, and I Don’t Care,
I Can Solve Iteratively.
Yes, that is correct, Oribifold. If you want the square root of 18, for example, your first digit in the solution will be the root of the closest square below it (4, since 4² = 16)…I forget the exact method after this obvious step. I’ll see if I can locate it in a reference book. In any case, I was taught it was developed by Newton.
FYI: No, this is not the same as the Newton-Raphson method of iterating to find the roots of polynomials, but perhaps one stems from the other?
I’ll look further into it, and post again ASAP…
- Jinx
I am not sure of the meaning or purpose of your rolleyes. Maybe my meaning wasn’t clear. I meant to say that in a practical implementation of the algorithm in a computer program, “guess a number” (in other words, get a random number) is not the easiest or most effective way to do it, and starting out the way I said is easier and generally more effective. Does this deserve a rolleyes?
Newton’s method converges quadratically provided that the starting point (“guess”) is sufficiently close to the solution.
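To see what “quadratically” means in practice, here is a quick numerical illustration (my own sketch, nothing more): the number of correct digits roughly doubles on every step until you hit machine precision.

[code]
import math

b, a = 2.0, 1.0                           # a reasonably close starting guess for sqrt(2)
for i in range(6):
    a = (a + b / a) / 2.0
    print(i, a, abs(a - math.sqrt(2)))    # the error roughly squares each iteration
[/code]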
You inferred that “guess” meant “random number”.
I still don’t get the rolleyes, but anyway: how do you implement, in a real computer programming language, the instruction “make a guess that is sufficiently close to the solution” in a manner that is simpler and more effective than what I proposed?
Right shift the exponent field.
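For example (my own sketch of that trick, and the exact constant is my choice, not necessarily Desmostylus’s): an IEEE-754 double keeps its exponent in the top bits, and halving the exponent gets you most of the way to a square root, so shifting the raw bits right by one and adding back half the exponent bias gives a serviceable starting guess in a couple of machine instructions.

[code]
import struct

def rough_sqrt_guess(x):
    """Seed for Newton's method: roughly halve the exponent via bit tricks."""
    bits = struct.unpack("<Q", struct.pack("<d", x))[0]   # reinterpret the double as raw bits
    bits = (bits >> 1) + (1023 << 51)                     # halve the exponent, restore half the bias
    return struct.unpack("<d", struct.pack("<Q", bits))[0]

print(rough_sqrt_guess(2.0))     # about 1.5   (true value 1.414...)
print(rough_sqrt_guess(100.0))   # about 10.25 (true value 10)
[/code]

From a seed like that, a handful of Newton iterations gets you to full double precision.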
“sufficiently close to the solution” has a precise meaning in terms of the convergence of Newton’s method. Look it up. :rolleyes: