# Why does this mathematical concept work?

While at school a few weeks ago, I learned an interesting way to multiply:
Take the two numbers to multiply, double the first, and halve the second (rounding down). Repeat this until the second number equals one. Then, add up all the first numbers (and only the first numbers!) in the rows where the second number is ODD. The sum is the product of the two numbers.
This probably doesn’t make sense, so I’ll demonstrate it with 27 * 19:
```
 27    19
 54     9
108     4
216     2
432     1
```
Cross out 108 and 216, because the second number is even. That leaves 27, 54, and 432.
27 + 54 + 432 = 513, and 27 * 19 = 513.
(Check it on your calculator if you don’t believe me! Check it with other numbers, too!)
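The recipe above can be sketched in a few lines of Python (a minimal sketch; the function name is mine, not from the thread):

```python
def peasant_multiply(a, b):
    """Multiply two non-negative integers by doubling and halving."""
    total = 0
    while b >= 1:
        if b % 2 == 1:   # second number is odd: this row gets added
            total += a
        a *= 2           # double the first number
        b //= 2          # halve the second, rounding down
    return total

print(peasant_multiply(27, 19))  # 513
```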

So: WHY does it work? Is there a mathematical proof that this works for all numbers? And when was it first discovered?

When I was taught this method at school it was known as Russian multiplication. This link describes how it works.

You are, in a way, converting one of the numbers to binary (base two), and doing the math that way.

First, the right-hand side: halving a number and keeping track of the remainder (which is 1 when the number is odd) is a standard method of converting to binary.

19 odd
9 odd
4 even
2 even
1 odd

or “odd, odd, even, even, odd”. Reading those remainders from the bottom up (last remainder first) gives 10011 in binary, which is 19 in decimal.
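That halving process is the usual remainder method for binary conversion; a quick sketch (function name is mine):

```python
def to_binary_by_halving(n):
    """Record odd/even at each halving step, then reverse the flags."""
    flags = []
    while n >= 1:
        flags.append(n % 2)   # 1 if odd, 0 if even
        n //= 2
    # flags come out least-significant first; reverse for the usual notation
    return ''.join(str(bit) for bit in reversed(flags))

print(to_binary_by_halving(19))  # '10011'
```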

Now, you are taking 27 (decimal) times 10011 (binary), where each “1” in the binary means 2^(some power) times the decimal number.

The far right 1 is just 1 * 27,
the next “1” is 27 * 2 = 54, which you also add,
the next 0 is the 27 * 4 = 108 you ignore,
same for the next 0 (27 * 8 = 216),
and the last “1” is 27 * 16 = 432.

27(d) * 10011(b) = 27 * 2^4 + 27 * 2^1 + 27 * 2^0,
which is in essence what you do when you choose which numbers on the left to add up - based on the binary digit, i.e. the remainder, i.e. whether the other number was odd or even.
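In other words, 19 = 16 + 2 + 1, so the product splits into three doubled copies of 27, which are exactly the rows you keep:

```python
# 19 = 2^4 + 2^1 + 2^0, so 27 * 19 is the sum of three doubled values
parts = [27 * 2**4, 27 * 2**1, 27 * 2**0]
print(parts, sum(parts))  # [432, 54, 27] 513
```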

Correction - typed too fast. The remainders read from the bottom up give 10011 (which is 19), not 11001, but the theory is correct.

At its simplest level, you’re just dealing with a column of equivalent equations. Consider this example:

```
 14    32
 28    16
 56     8
112     4
224     2
448     1
```

The “trick” is that 14x32 = 28x16 = 56x8 = 112x4 = 224x2 = 448x1
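That chain of equal products is easy to verify directly (a small sketch; the loop just walks down the table):

```python
# When the right column starts at a power of two, halving is exact and
# the product a * b is unchanged on every row.
a, b = 14, 32
while b > 1:
    assert a * b == 14 * 32   # invariant holds at each step
    a, b = a * 2, b // 2
print(a, b, a * b)  # 448 1 448
```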

The problem is that not every starting pair contains a power of two, so the halving isn't always exact. The part about the odds and evens accounts for exactly what gets lost in the rounding down.
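The key identity is that when b is odd, a*b = (2a)*(b//2) + a: halving an odd number loses one copy of a, and adding the left column on odd rows puts it back. A sketch of that invariant (variable names mine):

```python
# On odd rows, a*b == (2*a)*(b//2) + a, so saving a on those rows
# means a*b + saved never changes as we walk down the table.
a, b, saved = 27, 19, 0
while b >= 1:
    assert a * b + saved == 27 * 19   # invariant: nothing is ever lost
    if b % 2 == 1:
        saved += a
    a, b = a * 2, b // 2
print(saved)  # 513
```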

Years ago, my college history prof explained this method in class and claimed that some ancient people (I forget who, but perhaps the Egyptians) used the method. He then asked if anyone understood why it worked. I immediately realized it was as NE Texan describes above. Unfortunately for him, I couldn’t explain it in the limited time we had in class so I didn’t even try.

This, by the way, is the method that computers use for multiplication. Doubling a binary number is easy: you just shift all the bits one place left and put a zero on the end.
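You can see the shift directly in Python:

```python
x = 0b10011          # 19 in binary
print(bin(x << 1))   # 0b100110, i.e. 38: doubling is a one-place left shift
```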

Correction: it is a method that computers use for multiplication. There are a lot of algorithms for multiplication out there. The “grade school but in binary” method is O(n[sup]2[/sup]). A nice divide-and-conquer approach brings the exponent down to log[sub]2[/sub] 3. But Schönhage-Strassen is O(n log n log log n). If you are doing multiplication on hundreds of bits (as in RSA) you definitely don’t want to use the grade school method.
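The divide-and-conquer method with exponent log[sub]2[/sub] 3 is Karatsuba's algorithm: three half-size products instead of four. Here is a minimal sketch (the cutoff of 16 and the function name are arbitrary choices of mine):

```python
def karatsuba(x, y):
    """Karatsuba multiplication sketch for non-negative integers."""
    if x < 16 or y < 16:                  # small enough: multiply directly
        return x * y
    m = max(x.bit_length(), y.bit_length()) // 2
    xh, xl = x >> m, x & ((1 << m) - 1)   # split x = xh * 2^m + xl
    yh, yl = y >> m, y & ((1 << m) - 1)
    a = karatsuba(xh, yh)                 # high product
    c = karatsuba(xl, yl)                 # low product
    b = karatsuba(xh + xl, yh + yl) - a - c   # middle term from one product
    return (a << (2 * m)) + (b << m) + c

print(karatsuba(1234567, 7654321))  # 9449772114007
```

The trick is that (xh + xl)(yh + yl) - xh*yh - xl*yl equals the cross terms, so only three recursive multiplications are needed per level.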

Some devices also go the other way. On primitive calculators, multiplication was done using a (relatively) very slow repeated addition method. But it was still so fast that the answer appeared before your finger left the “x” button!

Note also that addition in computer hardware is most commonly done using “carry lookahead addition”, which is much faster than grade-school addition. And if the multiplication method uses that type of addition, it won’t look similar to grade-school multiplication at all.
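The idea is to compute “generate” and “propagate” signals per bit and derive all the carries from them; in real hardware the carries come out in parallel, while this toy Python sketch (names mine) just evaluates the same recurrence serially:

```python
def cla_add(a, b, width=8):
    """Toy carry-lookahead-style adder: carries from generate/propagate."""
    g = [((a >> i) & (b >> i)) & 1 for i in range(width)]  # bit creates a carry
    p = [((a >> i) ^ (b >> i)) & 1 for i in range(width)]  # bit passes one on
    carry = [0] * (width + 1)
    for i in range(width):           # recurrence c[i+1] = g[i] | (p[i] & c[i])
        carry[i + 1] = g[i] | (p[i] & carry[i])
    bits = [p[i] ^ carry[i] for i in range(width)]
    return sum(bit << i for i, bit in enumerate(bits))

print(cla_add(100, 55))  # 155
```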

And then there are Wallace Tree Multipliers …

Details! No, seriously, this is pretty interesting, except that I don’t understand it one bit. What does the ‘O’ represent? And what’s carry lookahead? And can I plant a Wallace Tree in my backyard?

Saying that f(n) is O(g(n)) means that there are two constants, N and c, such that whenever n > N, f(n) < cg(n). In plain English, it means that f doesn’t grow any faster than g as n gets large.
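To make the definition concrete (my own choice of f, g, c, and N): f(n) = 3n² + 5 is O(n²), witnessed by c = 4 and N = 3.

```python
# For n > 3, 3n^2 + 5 < 4n^2, so f grows no faster than n^2
f = lambda n: 3 * n * n + 5
g = lambda n: n * n
assert all(f(n) < 4 * g(n) for n in range(4, 10000))
```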

Wikipedia’s entry on the Carry Lookahead Adder isn’t great, but it’ll give you the basic idea.

Big O puts an upper limit on the growth rate of a function, as ultrafilter notes. Big Omega places a lower limit on its growth, and if the O and Omega of a function are the same, then it’s in Big Theta.

Basically, they’re sets that are useful when analysing algorithm performance in computer science / software engineering. Generally, an algorithm in O(n^3) will perform the task it is designed for more slowly than one in O(n) (though that’s not always true in practice).