Okay, once that gets old and boring, next you can work out what you get when you take the differences of five successive fourth powers, four times in succession.
Then, on some long, dreary, hot summer’s night, see what you get when you take the differences of six successive fifth powers, five times in succession.
(Hint: If you’ve taken about 3 weeks of 1st semester calculus, the above demonstrations are so trivial that they aren’t even a “problem” to be solved any more. If you haven’t taken any calculus, then Congratulations! You’re well on your way to discovering calculus for yourself just as Newton and Leibniz did.)
(ETA: Or, what Indistinguishable just said, except if you don’t already know some calculus, then you probably didn’t understand a word of it. )
Not an answer to the OP, but in physics at UMich in the ’70s calculators weren’t allowed, and they taught us a method based on algebraic binomials.
(a + b)^2 = a^2 + 2ab + b^2
Start with an estimate for the square root and use that for a. If your estimate is in the ballpark, b^2 is vanishingly small and can be ignored. Drop it, solve for b, and add that to your estimate. Usually good for 3 places on the first try, and you can iterate for higher precision. You can also estimate the precision as b^2.
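Not the poster’s exact classroom routine, but a minimal Python sketch of the idea (the target 2 and the starting guess 1.4 are just for illustration):

```python
def sqrt_binomial(n, estimate, iterations=3):
    """Refine a square-root estimate via (a + b)^2 = a^2 + 2ab + b^2.

    Treating the tiny b^2 as negligible, n ~ a^2 + 2ab gives
    b = (n - a^2) / (2a); add b to the estimate and repeat.
    """
    a = estimate
    for _ in range(iterations):
        b = (n - a * a) / (2 * a)
        a += b          # b*b also estimates the remaining error
    return a

print(sqrt_binomial(2, 1.4))  # ~1.41421356..., several more places than 1.4
```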
> When I was at school we had books of tables for all kind of calculations.
> Logarithms for example, which someone had tediously worked out for us so that
> we didn’t have to.
Senegoid writes:
> . . . the STEM masters of the late Middle Ages who published those massive tables
> of trig functions . . .
Until the 1950s, when computers (the machines, that is) became common enough that they could be used for creating all the many tables of mathematical functions (logarithmic, trigonometric, and a variety of others you may not have heard of), these tables were created by computers (the people, that is). (This probably also is true of scientific and business tables, but I’m a mathematician and don’t know much about them.) That’s right, the term “computers” used to refer to a profession. These were people who would spend their careers sitting in a room, often with others of their profession, doing mathematical calculations all day. The results of these calculations would then be compiled into tables.
During the first half of the twentieth century, computers tended to be women. They were often the sort that today would be going to college to get undergraduate and graduate degrees in STEM fields. In those days they were often discouraged from getting such degrees and told to take jobs as computers. Their employers could pretend that these women were just doing secretarial jobs even though these jobs needed people with enough mathematical skill to understand the ideas of their calculations.
I am not sure that’s true. They would have been doing calculations without necessarily understanding the mathematics. The young women who were recruited to work at Bletchley Park (women because the young men were in the forces) tended to be graduates, or at least in higher education. If there had been a class of female workers (“computers”) already skilled at dealing with arcane calculations, it would have been these who were recruited. I think the job of “computer” was the equivalent of “computer operator” today.
One of my early jobs in the early 60s was in the 3M factory in South Wales. I had to record raw material usage on a set of cards (simple arithmetic), and my calculations were checked by a young lady using a “comptometer”. This was a complex adding machine that needed training to use, but the operator had no more idea of how it worked than I did.
> A Computer Wanted. […] The examination will include the subjects of algebra,
> geometry, trigonometry, and astronomy.
There were a variety of such jobs. The one you’re talking about was someone dealing with business calculations which required understanding of arithmetic. Other such jobs required knowledge of more difficult mathematics.
It seems to me that he wouldn’t actually “extract” square roots in the sense that we think of it. We are accustomed to punching “2” then “SQRT” into a calculator and getting 1.4142135623, but Archimedes probably would have just said (7/5)^2 = 49/25, which is clearly less than 2, and (10/7)^2 = 100/49, which is clearly more than 2; therefore the square root of 2 is between 7/5 and 10/7. He would have looked at tables of square numbers and tried to find a pair where one of them is almost exactly twice the other.
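Purely as illustration (certainly not anything Archimedes wrote down), here’s what that table search might look like as a small Python sketch:

```python
# Scan a "table" of squares for pairs where one square is almost exactly
# twice the other; each near miss n^2 = 2*d^2 +/- 1 brackets sqrt(2).
for d in range(1, 50):
    for n in range(1, 100):
        if abs(n * n - 2 * d * d) == 1:
            print(f"{n}/{d} = {n / d:.6f}  (n^2 - 2d^2 = {n * n - 2 * d * d})")
# Prints 1/1, 3/2, 7/5, 17/12, 41/29, ... alternating under- and over-estimates.
```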
The cattle problem also has a nice relationship with (the misnamed) “Pell’s Equation”, and thus with approximation of square roots. That having been said, it’s unclear Archimedes actually had anything to do with the cattle problem.
While the second-smallest solution to the cattle problem is indeed impressively large, that link leaves off the smallest solution, which is much more accessible. No bull.
You can also calculate square roots (or, for that matter, any other root) by using the binomial series (also discovered by Newton, IIRC). You can go to as many decimal places as you want by calculating successive terms of the series.
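As a hedged sketch in Python (writing 10 = 3^2 * (1 + 1/9) is my choice, just so the series converges):

```python
from fractions import Fraction

def root_binomial(r, base, terms):
    """Approximate sqrt(r) via the binomial series: write r = base^2 * (1 + x)
    with |x| < 1, then sum (1 + x)^(1/2) = sum_k C(1/2, k) x^k."""
    x = Fraction(r, base * base) - 1
    half = Fraction(1, 2)
    term, total = Fraction(1), Fraction(0)
    for k in range(terms):
        total += term
        term *= (half - k) * x / (k + 1)   # ratio of successive binomial terms
    return base * total

print(float(root_binomial(10, 3, 20)))  # ~3.16227766; more terms, more digits
```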
Yup, although this is again slower than the Newton/Babylonian method; the number of accurate digits of the approximation is asymptotically linear in the number of terms of the series used, as opposed to the exponential convergence of the Newton/Babylonian method.
Sorry, I read Chronos much too quickly, so I have gone back to look. The trouble is that this method converges much too slowly. The real problem is that this “mediant” (a new word for me) is not defined on rational numbers, but on fractions. To explain the difference, let me use # to denote the mediant and take Chronos’s example of 10 (for larger numbers that are further from a perfect square, it would get worse). After four approximations, Chronos was still getting it between 3.13 and 3.18 (the actual answer being 3.162…). Now he uses 3/1 # 10/3 = 13/4 = 3.25. But suppose I tried 9/3 # 10/3 = 19/6 = 3.167. Already much better. Now if we use the mediant again we get 19/6 # 60/19 = 79/25 = 3.16. But 57/18 # 60/19 = 117/37 = 3.162, correct to three decimal places. The trouble with the mediant is that it depends on the sizes of the denominators and gives a poor mean if they are disparate. At this point you may as well use Newton’s method, even if it does involve a multiplication.
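To make that representation-dependence concrete, a throwaway Python sketch (notation mine):

```python
from fractions import Fraction

def mediant(n1, d1, n2, d2):
    """The # operation: it sees numerators and denominators,
    not the rational numbers they represent."""
    return Fraction(n1 + n2, d1 + d2)

print(mediant(3, 1, 10, 3))  # 13/4 = 3.25
print(mediant(9, 3, 10, 3))  # 19/6 = 3.1666..., same two rationals, different answer
```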
However, all of this is really beside the point. Classically, I think there were no Roman numerals above C (D and M came in the Middle Ages, IIRC). I know there are ways of multiplying them and perhaps even dividing (although I cannot imagine how). But the Greek numbering system was even more primitive than the Roman. And remember, he had to do five nested square roots. My mind boggles.
Could they have had some sort of primitive abacus? Note that something like decimal notation is implicit in an abacus. I believe that “abacus” comes from the Latin word for table. Some sort of counting table, maybe?
You are right that Chronos’s method converges slowly. However, to your point about the mediant, we should note that Chronos’s method can be interpreted as using the mediant in a way which is well-defined regardless of choice of fractional representation.
That is, we can think of Chronos’s method as, on each iteration, replacing approximation n/d to sqrt(r) with the better approximation n/d # rd/n = (n + rd)/(n + d).
But this is simply the process of iterating the function x |-> (x + r)/(x + 1). Note that this function is well-defined regardless of representation of x.
That this iteration converges linearly is because this function has a nonzero derivative at x = sqrt(r) (so that the error term approximately multiplies by this derivative on each iteration). Newton’s method converges exponentially instead because it iterates the function x |-> (x + r/x)/2, whose derivative at x = sqrt(r) is zero (so that the leading term in the function’s Taylor series around this point is quadratic, and thus the error term is replaced by a value approximately proportional to its square on each iteration).
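A quick numerical illustration of the two rates, taking r = 10 and printing the errors per iteration:

```python
import math

r = 10
root = math.sqrt(r)
x = y = 1.0
for i in range(1, 7):
    x = (x + r) / (x + 1)   # Chronos-style: error shrinks by a roughly constant factor
    y = (y + r / y) / 2     # Newton: accurate digits roughly double each time
    print(i, abs(x - root), abs(y - root))
```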
Let us also note that if you are willing to use multiplications, Chronos’s method can be sped up considerably (while producing the same approximations): we may think of Chronos’s method applied to starting approximation n/d as producing n’/d’, where n’ + Jd’ is the result of repeatedly multiplying n + Jd by 1 + J, taking J as a new constant introduced to satisfy the identity J[sup]2[/sup] = r (much like the complex numbers are introduced with new constant i satisfying i[sup]2[/sup] = -1).
Naively, we perform m iterations of this by simply multiplying by 1 + J anew m many times, taking time proportional to m; however, more efficiently, we may precompute (1 + J)[sup]m[/sup] using “repeated squaring” in time proportional to log(m). This then recovers an exponential speed of convergence.
That is, we may skip quickly along the series of approximations of Chronos’s method, starting, say, from 1/1, by repeatedly replacing n/d with (n[sup]2[/sup] + rd[sup]2[/sup])/(2nd) [by consideration of the square of n + Jd], and in so doing, will exhibit exponential convergence (though this does require we use multiplication). But note that this simply means repeatedly replacing x with (x + r/x)/2, and thus we see that this is the same as Newton’s method as well!
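In Python, with exact fractions (the starting approximation 1/1, i.e. 1 + J, is chosen arbitrarily):

```python
from fractions import Fraction

def square_J(n, d, r):
    """Square n + J*d using J^2 = r: (n + J*d)^2 = (n^2 + r*d^2) + J*(2*n*d)."""
    return n * n + r * d * d, 2 * n * d

r = 10
n, d = 1, 1                    # the approximation 1/1, written as 1 + J
for k in range(1, 6):
    n, d = square_J(n, d, r)   # each squaring doubles the count of (1 + J) factors
    print(k, float(Fraction(n, d)))   # 5.5, 3.659..., 3.196..., 3.1624..., 3.16227...
```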
Just wondering, but this guy is a Brit. Do Brits use different terminology to explain mathematics?
For example, in section 2, “Brute Force”, the first sentence says “The brute force method involves calculating m-squared and 3n-squared for all m, n up to 1351, then finding m, n such that m-squared - 3n-squared is very small.” OK so far, but when you look at the table at the bottom of that page (Table 1),
he has 120-squared = 14400 + 241, and to the right, 3*120-squared = 43200 + 723.
I interpret this as he is using 120 for both n AND m. Am I correct, or did I read this wrong?
ETA: apologies for not knowing how to put a superscript or subscript into the post.
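For what it’s worth, here’s how I’d render that brute-force search in Python, reading m and n as running independently (whether that matches the page’s Table 1 is exactly my question):

```python
# Search independent m, n up to 1351 making m^2 - 3n^2 very small.
hits = [(m, n, m * m - 3 * n * n)
        for m in range(1, 1352)
        for n in range(1, 1352)
        if abs(m * m - 3 * n * n) <= 2]
print(hits[-1])  # (1351, 780, 1), i.e. Archimedes' upper bound 1351/780 > sqrt(3)
```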
I see now that I was beaten to the punch on this by Thudlow Boink back in post #4. But re: the speculative historical question of what method Archimedes actually used for his famous works, you should take a look at this, if you haven’t yet.
Oh, this is really clever (this isn’t the first time you’ve used the “introduce a new abstract constant” trick, but I think I’m finally grokking the approach now).
Did we just get lucky that J[sup]2[/sup] equals r? Well, obviously you chose it to be that, but what I mean is this:
Our original iteration is this:
n/d |-> (n + rd)/(n + d)
Which is just:
n/d |-> (n/d + r)/(n/d + 1)
And then:
x |-> (x + r)/(x + 1)
If we want to treat n/d as a single variable, we introduce J (which I’ll leave completely undefined for now) and represent our number as (n + Jd). Therefore:
n + Jd |-> n + rd + Jn + Jd
If we try to arbitrarily collect terms, we see that there’s an (n + Jn) in there, and since our original number has an n term, it makes sense to choose (1 + J) as the multiplier. Let’s see how that goes:
(n + Jd)(1 + J) = n + Jn + Jd + J[sup]2[/sup]d
And what we wanted to see was:
n + rd + Jn + Jd
From this, we can see that the unmatched terms are J[sup]2[/sup]d and rd. So we define J[sup]2[/sup] = r and we’re done.
That seems awfully convenient. Is it just luck that it turned out so cleanly? I suppose it depends on the (1 + J) multiplier, but the way I see it, it came about because it was the obvious choice, not because it would optimize the final answer.
I wouldn’t say it’s just luck, but I’ll have to think some about why I wouldn’t say that and what I would say instead.
[Let me make one other neat observation in the meanwhile: if x and y are (positive) approximations of sqrt(r) with m and n digits of accuracy beyond the decimal point, respectively, then we can combine them into the even better approximation (xy + r)/(x + y), with (asymptotically) about m + n digits of accuracy (actually, we should expect even 1 bit of accuracy beyond this, but never mind that).
Chronos’s method amounts to repeatedly combining approximations in this way with 1, thus gaining a constant number of new accurate digits with each iteration.
Newton’s method amounts to repeatedly combining approximations in this way with themselves, thus doubling the number of accurate digits with each iteration.]
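A tiny Python sketch of this unifying view (the function name is mine):

```python
import math

def combine(x, y, r):
    """Merge two positive approximations to sqrt(r); digits of accuracy roughly add."""
    return (x * y + r) / (x + y)

r, root = 10, math.sqrt(10)
x = y = 1.0
for _ in range(6):
    x = combine(x, 1.0, r)   # Chronos: a constant number of new digits per step
    y = combine(y, y, r)     # Newton: the digit count roughly doubles per step
print(abs(x - root), abs(y - root))
```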