Incredibly useful math concepts that everyone should know how to use

On the other hand, I met somebody who told me that he got into geometry precisely because he found it easy to visualize. So YMMV.

This statement is breathtaking to me, especially on a board ostensibly dedicated to well-supported and rational discourse. Maybe you meant “I knew some folks from older generations who had intuitive math skills”?

I take it back, we’re in IMHO. Nevermind.

I do, of course. When I was more active in research, I got some good ideas while trying to fall asleep or walking to my office. I still think back on problems I solved long ago, turning the solutions over in my mind.

overall, I am pretty good at math … (dunno, probably top 5-10% of the overall population) …

but I cannot for the life of me do long division on paper (like I learned 45+ years ago) …

stuff like 89,642 / 712 = ???
(and by that I mean I do not recall the algorithm to do it)

but I am pretty good at getting a “close number” in my head w/out pen/paper … (I just did the basic 10-sec test and came to “120 to 130” as a result … my cellphone tells me the result is 125.9)

good enough for 95% of all occurrences in my life - for the remaining 5% I have a cellphone :wink:

Very few people do. You learn that in about third grade, and then your teachers say “And now that we’ve shown that we know how to do that, we can use a calculator”. And then students get to Algebra II and learn how to divide polynomials, which uses the same process but for which there isn’t a button on a calculator, and everyone goes “Wait, I used to know how to do that”.

And yes, it is the same process (if anything, a touch simpler, because you don’t have to worry about borrowing). This is because place-value numbers are just polynomials evaluated at x = 10: 89,642 is 8x^4 + 9x^3 + 6x^2 + 4x + 2 with x = 10.
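To make the parallel concrete, here is a minimal sketch of polynomial long division (`poly_divmod` is a made-up name; coefficients are listed highest degree first). It is the same pick-a-digit, subtract-the-partial-product loop as grade-school long division:

```python
def poly_divmod(num, den):
    """Polynomial long division, coefficients highest degree first.
    Same loop as grade-school long division: pick the next quotient
    "digit", subtract the partial product, move to the next term."""
    num = list(num)                      # work on a copy
    quot = []
    for i in range(len(num) - len(den) + 1):
        coef = num[i] / den[0]           # next quotient coefficient
        quot.append(coef)
        for j, d in enumerate(den):
            num[i + j] -= coef * d       # subtract coef * divisor
    rem = num[len(quot):]                # what's left over is the remainder
    return quot, rem

# (x^2 + 3x + 2) / (x + 1)  ->  quotient x + 2, remainder 0
print(poly_divmod([1, 3, 2], [1, 1]))
```

The only difference from numeric long division is that the coefficients are not forced to be single digits, which is exactly why there is no borrowing step.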

The point is that with mathematical skills/concepts you can figure it out.

Do kids still learn this? I did, but I can’t remember the last time I actually did it. Still, I solved it without too much difficulty (125 remainder 642; the numbers you chose worked out fairly easily), but I wouldn’t expect the average person to do so.

P & E would be a lot less frustrating if people in general understood probability.

Having been shown how to do it by hand is useful for developing a conceptual understanding that helps you quickly grasp the order of magnitude of a calculation even before reaching for an actual calculator. Trigonometry classes in high school or college do something similar for things like the sine and cosine of an angle.

Several decades ago when slide rules were still in common use, the people who used them regularly got good at estimating results because it was helpful when using a slide rule. A slide rule works by turning a difficult multiplication or division operation into an easy addition or subtraction problem. To do a division, you can take the logarithm (log) of each number, subtract one from the other, and take the inverse log of the result to get your final answer. This is great, because subtraction is much easier to do by hand than long division is. A slide rule is a really handy tool that gives you the logarithm of a number, but the catch is that it can only do this for numbers between 1 and 10. So you move the decimal points on your starting numbers:

89,642 → 8.9642
712 → 7.12

The log of 8.9642 is 0.95251.
The log of 7.12 is 0.85248.
The difference of these two is 0.10003.
The inverse log of 0.10003 is 1.259.

To get from 1.259 to the final answer, you need to remember how many places you moved the decimal points in your starting numbers: four for the first number, minus two for the second (minus because you’re doing division instead of multiplication). Or you can skip that and just round your starting numbers to the nearest decade (10, 100, 1000, etc.):

89,642 → 100,000
712 → 1,000

Now divide 100,000 by 1,000, which is really easy to do in your head: the answer is 100. Now you know your final answer should be near 100, so you take your slide-rule answer of 1.259, and move the decimal to make your answer near 100:

1.259 → 125.9

Easier than trying to keep track of decimal point movements earlier in the operation.
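For what it’s worth, the whole procedure is easy to mimic in a few lines of Python (the variable names are just for illustration):

```python
import math

# Slide-rule style division of 89,642 by 712: shift both numbers
# into the range [1, 10), subtract their logs, then fix the decimal.
mantissa = 10 ** (math.log10(8.9642) - math.log10(7.12))
print(round(mantissa, 3))       # 1.259, the slide-rule reading

estimate = 100_000 / 1_000      # rough rounding says the answer is near 100
result = mantissa * estimate    # shift the decimal to land near the estimate
print(round(result, 1))         # 125.9
```

The `estimate` step is the mental-arithmetic part; the log subtraction is what the sliding scales do mechanically.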

Being able to round numbers so that you can quickly calculate an estimate in your head is handy - and if you can develop a sense of how big your rounding errors are on your inputs, you can hedge your estimate, which is even more handy. In this case, I can tell you that the denominator (712) moved relatively farther to round up to 1,000 than the numerator (89,642) moved to round up to 100,000, so I know my answer will be somewhat greater than 100, definitely not less.

Here is an example: remember that algorithm you may or may not have learned for taking square roots via long division? Hardly anyone remembers it, because it’s not that useful. However, given a bit of time you can reconstruct it, assuming you more or less understand how it is supposed to work.

I remember it - it’s fun to play with during boring meetings.

I still teach a slide rule lesson to my high schoolers, whenever we do logarithms (algebra 2 and pre-calc), just because it’s so good for illustrating concepts like that.

I never learned that, but I have whiled away time by coming up with algorithms for square roots.

I more or less remember it - or can reproduce it, anyway. But here is an interesting story about it.

A number of years ago (probably nearly 40) I bought a Forth (an old computer language) interpreter. It came with a built-in square root program that worked by taking a trial square root, say t (I think they started with t = 1), and averaging t with n/t, where n is the original number. Note that if t is the square root, then t = n/t. Rinse and repeat until you get t = n/t (or nearly so).
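That averaging scheme is the classic Babylonian (Heron’s) method; a minimal sketch, with a made-up function name:

```python
def babylonian_sqrt(n, t=1.0, steps=20):
    """Heron's / Babylonian method: if t != sqrt(n), then sqrt(n)
    lies between t and n/t, so their average is a better guess."""
    for _ in range(steps):
        t = (t + n / t) / 2
    return t
```

Each pass roughly doubles the number of correct digits, so a handful of iterations is plenty; the catch described above is that every pass costs a division (or two, done naively).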

It was very slow, since on the 8088 chip division is very slow and each iteration used 2 divisions, one to get n/t and the other to do the average (although an obvious optimization is a shift right to divide by 2, so maybe it used only one). But I recalled that seventh-grade algorithm. It was basically: choose a trial next digit and, if it turns out to be too large, reduce it. Well, in binary, the next bit can be only 0 or 1. Always try 1, and if it turns out to be too large, choose 0 and go to the next stage. It was easy to program and ran rings around the original algorithm. Each step used only a subtraction and a comparison. I sent it off to the vendor and I think he used it in his next iteration.
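If I’m reading the description right, the binary version goes bit by bit like this (a sketch; `isqrt_binary` and the exact bookkeeping are my own, but the loop body really is just subtraction, comparison, and shifts):

```python
def isqrt_binary(n):
    """Integer square root, one bit at a time: tentatively set the
    next bit of the root to 1, and keep it only if the square still
    fits under n; otherwise fall back to 0. No divisions anywhere."""
    root = 0
    bit = 1
    while bit * 4 <= n:          # highest power of 4 not exceeding n
        bit *= 4
    while bit > 0:
        if n >= root + bit:      # does the trial 1 bit still fit?
            n -= root + bit
            root = root // 2 + bit
        else:
            root //= 2           # keep the 0 bit instead
        bit //= 4
    return root
```

For example, `isqrt_binary(89642)` returns 299, the integer part of the square root.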

You can improve that algorithm, to only need one division, total (no matter how many iterations you do). First off, keep all of your approximations as rational numbers, numerator over denominator, so you can invert a number just by swapping the two. Second, all the algorithm needs is that you pick a new number between t and n/t, not that it be any particular average, and a quick-and-dirty way to do that with rational numbers is to add both the numerators and denominators.

For instance, take \sqrt{2}: start with 1, or \frac{1}{1}. 2 divided by that is \frac{2}{1}, so our next approximation is \frac{1+2}{1+1}, or \frac{3}{2}. After that, we want a number between \frac{3}{2} and 2 divided by it, \frac{4}{3}. One possibility is \frac{3+4}{2+3}, or \frac{7}{5}. Then the next approximation is \frac{17}{12}, then \frac{41}{29}, etc. Once you’ve taken as many iterations as you want, only then do you actually do the division (for instance, \frac{41}{29} \approx 1.4138, compared to \sqrt{2}\approx 1.4142).

Yes, that works, but that is not what the given program did. That would have been a vast improvement, but my point is that the 7th-grade algorithm becomes really easy in binary.

In the early 80s I read an article in Popular Science about a similar algorithm for calculating roots of any positive integer degree i: divide x by t to the (i-1) power, add (i-1) times t, and divide by i. So for x=8 and an initial cube-root guess of 1, you get (8+2)/3 as the next guess, and then 2.462, then 2.081, and so on down to 2.
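That recipe is Newton’s method applied to t^i = x; a quick sketch under that reading (the function and parameter names are mine, with the degree called n):

```python
def nth_root(x, n, t=1.0, steps=25):
    """Newton's method for the n-th root of x, as described:
    t_next = (x / t**(n-1) + (n-1) * t) / n."""
    for _ in range(steps):
        t = (x / t ** (n - 1) + (n - 1) * t) / n
    return t
```

With x=8, n=3 and t=1, the first few iterates are 10/3, 2.462…, 2.081…, matching the guesses quoted above.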

At first, I thought this would work. But on thinking about it, I realized that you had to know whether the latest number was above or below the square root in question in order to know which two of the last three to take the Farey average (as it is known) of. This works well with the square root of 2, but consider, say, 5.
The first two numbers will be 1/1 and 5/1, and the third is 6/2. Now we can see that (6/2)^2>5, but if we want to design an automated algorithm to do it, we have to know whether 6^2 is less than or greater than 2^2\times 5, and this requires 3 multiplications, which is also slow, at least on the 8088. If you ignore this, your sequence will go
1/1, 5/1, 6/2, 11/3, 17/5, 28/8, 45/13, 73/21, …, which seems to be heading towards about 3.5. These are actually Fibonacci-type sequences, and it would not be hard to calculate the limit, but it’s not worth the effort.

You’re not picking two of the last three. You’re picking two of the last two.

At every stage i, you have an a_i and a b_i, both of which are approximations of \sqrt{x}. For simplicity, we can take a_0 = 1 and b_0 = x, if you like (though in practice, you can usually find a better starting approximation). Then a_{i+1} is the Farey average of a_i and b_i, and b_{i+1} = x/a_{i+1}. At any given step, we don’t necessarily know which of a or b is larger, but we know that one of them is greater than the true square root and one is less, and that any number in between them is therefore a better approximation.

Using your example of 5:

i    a_i          b_i           a_i (decimal)
0    1            5             1
1    6/2          10/6          3
2    16/8         40/16         2
3    56/24        120/56        2.33333
4    176/80       400/176       2.2
5    576/256      1280/576      2.25
6    1856/832     4160/1856     2.23077
...
inf                             2.23607
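The a/b iteration is easy to play with using Python’s fractions module (`farey_sqrt` is a made-up name; note that `Fraction` reduces automatically, so the intermediate fractions come out in lowest terms rather than the unreduced forms in the table, but the values match):

```python
from fractions import Fraction

def farey_sqrt(x, steps):
    """Approximate sqrt(x): a and b always bracket the true root.
    Each step, a becomes the mediant (add numerators, add
    denominators) and b is refreshed as x / a, which for rationals
    is just integer bookkeeping, with no decimal division until
    the very end."""
    a, b = Fraction(1), Fraction(x)
    for _ in range(steps):
        a = Fraction(a.numerator + b.numerator,
                     a.denominator + b.denominator)
        b = Fraction(x) / a
    return a

print(farey_sqrt(2, 4))         # 41/29, as in the sqrt(2) example
print(float(farey_sqrt(5, 6)))  # ~2.23077, matching the table above
```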