Maths tricks: how were roots, logs etc. worked out before calculators?

I just pulled my 25th edition CRC Math Tables book off my shelf (I guess nivlac is even older than I am). The common log table is only 16 pages. Still, I wouldn’t want to calculate those 16 pages of figures by hand.

Any particular reason you want him to hate math? Give him some good interest-rate problems. He'll get to use plenty of exponentials and logs, and he'll learn to pay off his credit card in full every month.

Most iterative methods only help if the inverse function is easier to evaluate than the one of interest. Square roots, for example, are much trickier than squaring, so an iterative trial technique makes use of the simpler inverse function. Testing a log requires raising the base to a (typically) non-integer power, which is tough if you don't already have a log table.
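As a sketch of that guess-and-test idea (the function name and tolerance here are my own illustrative choices), a square root can be pinned down by bisection using nothing but the easy inverse operation, squaring:

```python
def bisect_sqrt(n, tol=1e-10):
    """Approximate sqrt(n) by guess-and-check: squaring a guess is easy,
    so narrow an interval [lo, hi] until it brackets the root tightly."""
    lo, hi = 0.0, max(n, 1.0)
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if mid * mid < n:      # the test uses only the easy inverse (squaring)
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

print(bisect_sqrt(2))   # ≈ 1.41421356...
```

Each step only ever squares the trial value, which is exactly the "simpler inverse function" being exploited.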

A number of functions have polynomial expansions; I don't know of one specifically for logs, but certainly the trig functions do. This reduces the calculation to a bunch of multiplications and divisions. The expansions are normally infinite, but can be truncated to produce whatever precision is required. Rather than a look-up table, this is the means typically used by digital electronics. The expansions are also handy for solving some classes of differential equations. Googling "taylor series expansion" should yield more detail than you want.

One could use the Taylor series expansion of log(x) around x = 1. (Note that that’s the natural (base e) log.) However, there could be approximating polynomials with faster convergence, and with better error behavior for a given fixed number of terms.
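As a minimal sketch of that approach (illustrative code, not any particular library's routine; note the series only converges for 0 < x ≤ 2):

```python
def ln_taylor(x, terms=50):
    """Natural log via the Taylor series of ln(1+t) about t = 0,
    with t = x - 1.  Converges only for 0 < x <= 2, slowly near 2."""
    t = x - 1.0
    total = 0.0
    for k in range(1, terms + 1):
        total += (-1) ** (k + 1) * t ** k / k
    return total

print(ln_taylor(1.5))   # ≈ 0.405465 (= ln 1.5)
```

Near x = 2 the convergence is painfully slow, which is one reason better-behaved approximating polynomials are preferred in practice.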

Computer and calculator algorithms for computing fundamental math functions often use Chebyshev polynomials. Chebyshev himself lived during the 19th century, though, so presumably these polynomials weren't available before then.

Why? I don’t understand this attitude. Why would anyone want to make someone else go through “some of the hard grind I did” if it weren’t necessary?

I agree with robardin that it will probably just make the kid hate math.

  1. While Taylor series and Newton-Raphson are okay for doing a single number, tables were done by interpolation. Some starter values were worked out, and then low-degree polynomial formulas were used to interpolate the rest. The values furthest from the starting points can then be checked by other methods to ensure that the interpolation worked well. Many tables routinely had errors in early printings.

Note that the CRC tables routinely had 2 tables for key functions. A low accuracy one and a high accuracy one. Why not just print the higher accuracy one? Because some people didn’t trust them! Weird, but if you’re selling books to a market, you gotta make your market happy.

  1. Mechanical calculators were around for a long time. These tremendously helped in the production and accuracy of tables.

  2. Logarithms in particular are nasty to curve-fit with polynomials. Forget Taylor series.
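The table-by-interpolation workflow from point 1 can be sketched in a few lines of Python (the decade pivot spacing and plain linear interpolation here are my own toy choices; real table makers used higher-degree formulas):

```python
import math

# Toy sketch of table-making by interpolation: compute a sparse set of
# "pivot" values carefully, then fill the table in between with a
# low-degree (here linear) interpolation formula.
pivots = {x: math.log10(x) for x in range(10, 101, 10)}  # carefully computed anchors

def interp_log10(x):
    """Linearly interpolate log10(x) between the two surrounding pivots."""
    lo = (x // 10) * 10
    hi = lo + 10
    frac = (x - lo) / 10
    return pivots[lo] + frac * (pivots[hi] - pivots[lo])

# Entries midway between pivots are the least accurate -- check them:
worst = max(abs(interp_log10(x) - math.log10(x)) for x in range(10, 100))
print(round(worst, 4))
```

The worst error lands midway between pivots, which is exactly where a table maker would spot-check by another method.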

I waited forever as Firefox spun its wheels, then tried it with Safari. Instant response. Now, at least, I have a reason not to trash Safari.

Whoa - I’ve been wondering for 20 years why the hand method of square roots worked, and you just explained it to me in a single sentence. Lumpy, you rock.

Sorry; I think it will work if you open it in a blank window or tab (type “about:blank” if necessary before pasting in the javascript).

I’ve had to work out logs and square roots by hand before. While I can’t answer how logs got into tables before calculators, knowing a few simple logs can certainly allow you to estimate complex logs.

Because log(xy) = log x + log y, you can break a number down into various factors, and just memorize the logs of a few small primes (2, 3, 5, 7 work well enough).

For instance, let’s calculate ln 34576 (I just smashed my hand on the number pad).

ln 34576 ≈ ln 35000 = 2.3*log(35000) = 2.3*(log(35) + log(1000)) = 2.3*(log 5 + log 7 + 3) ≈ 2.3*(0.70 + 0.85 + 3) = 2.3*4.55 = 10.465

Actual answer: 10.451
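The same estimate in Python, using only the memorized two-digit logs (a check of the arithmetic above, nothing more):

```python
LOG10 = {2: 0.30, 3: 0.48, 5: 0.70, 7: 0.85}  # memorized two-digit logs

# 34576 ≈ 35000 = 5 * 7 * 1000, and ln x ≈ 2.3 * log10(x):
approx = 2.3 * (LOG10[5] + LOG10[7] + 3)
print(round(approx, 3))   # 10.465, versus the true ln 34576 ≈ 10.451
```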

With square roots, you can employ similar tricks. Multiply/divide numbers by squares until you get something that’s close to something you already know.

sqrt(34576) = sqrt(100)*sqrt(345.76) ≈ sqrt(100)*sqrt(346)

18[sup]2[/sup] < 346 < 19[sup]2[/sup], so sqrt(346) ≈ 18.5 => sqrt(34576) ≈ 185

Actual answer: 185.9
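A rough Python sketch of the square-pulling trick (the function and its midpoint guess are my own illustrative choices):

```python
def rough_sqrt(n):
    """Rough square root: strip out factors of 100, bracket the remainder
    between consecutive integer squares, and take the midpoint."""
    scale = 1
    while n >= 400:          # keep the remainder small enough to bracket by hand
        n /= 100
        scale *= 10
    k = 1
    while (k + 1) ** 2 <= n:
        k += 1
    return scale * (k + 0.5)   # midpoint of [k, k+1]

print(rough_sqrt(34576))   # 185.0 (true value ≈ 185.9)
```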

Thanks — it worked with about:blank. Hmmm. I thought of another reason to hang on to Safari, since that one no longer applies, but the margin isn’t big enough to contain it.

$99? coughchoke But their Used link does have some reasonable ones … well done, thanks!

I don’t. :rolleyes:
i·ro·ny n.
1. The use of words to express something different from and often opposite to their literal meaning.
2. An expression or utterance marked by a deliberate contrast between apparent and intended meaning.
3. A literary style employing such contrasts for humorous or rhetorical effect.

All I can say is Wow :cool:

Some roots, by the way, can be extracted by using the binomial theorem to expand (1+x)[sup]y[/sup], where x is close to zero and y is anything you like (one-half, one-third). The plus point is that you need take only the first few terms of the expansion to get multi-digit accuracy. The minus points are: first, you need to learn the binomial theorem; and second, you have to fiddle around to find something suitable to expand. But, say, for finding the cube root of 37 it beats Newton-Raphson (by hand, at any rate) into a whimpering, bleeding wreck.

(999 = 37 x 27.
0.999 = 999/1000.
Binomially expand (1-0.001)[sup]1/3[/sup] to three terms, multiply your answer by 10 and divide by 3 and it’s jam today.)
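Here's that recipe as a Python sketch (the helper is illustrative; it just sums the first few binomial-series terms):

```python
def binom_root(x, y, terms=3):
    """Sum the first few terms of the binomial series (1+x)^y, |x| < 1."""
    total, coeff = 0.0, 1.0
    for k in range(terms):
        total += coeff * x ** k
        coeff *= (y - k) / (k + 1)   # next binomial coefficient C(y, k+1)
    return total

# Cube root of 37: 999 = 37 * 27 and 0.999 = 1 - 0.001, so
# 37^(1/3) = 10 * (1 - 0.001)^(1/3) / 3.
approx = 10 * binom_root(-0.001, 1 / 3) / 3
print(approx)   # ≈ 3.3322218
```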

If you can find an old math textbook—say, a Calculus or College Algebra book that’s at least 15 or 20 years old—it’ll probably have at least a page’s worth of log tables in the back. (Maybe trig tables, too.)

I think that that’s the method that we were taught. Thanks for reminding me.

Moments ago, before reopening this thread, I worked it out. The value I came up with, 3.3322218, cubes to 36.999998, and that was after going only to the term in x[sup]2[/sup]. Mind you, this particular instance is a really good example of the technique, simply because x is so small (-0.001). Others have to be taken to more terms.

There are lots of ways to estimate logs if you already know a few values, using the log rules log(xy)=log x + log y, log x^y=y log x, etc.

If I recall correctly, some log series also converge very rapidly. ln(1+x) = x - x[sup]2[/sup]/2 + x[sup]3[/sup]/3 - …, hence ln((1+x)/(1-x)) = 2(x + x[sup]3[/sup]/3 + x[sup]5[/sup]/5 + …), which converges rapidly for -1 < x < 1. To find log[sub]10[/sub](sqrt(11)/3), for example, note that ln(sqrt((1+x)/(1-x))) = 0.5*(ln(1+x) - ln(1-x)) = x + x[sup]3[/sup]/3 + x[sup]5[/sup]/5 + …; for y = sqrt(11)/3, x = 0.1 gives ln y = 0.100335 (to three terms), and log[sub]10[/sub] y = ln y / ln 10.
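A quick numerical check of that series (illustrative sketch; it reproduces the sqrt(11)/3 example, with x = 0.1):

```python
import math

def ln_series(x, terms=3):
    """ln(sqrt((1+x)/(1-x))) = x + x^3/3 + x^5/5 + ...; converges
    fast for small |x|."""
    return sum(x ** (2 * k + 1) / (2 * k + 1) for k in range(terms))

# y = sqrt(11)/3 means y^2 = 11/9 = (1+x)/(1-x) at x = 0.1:
ln_y = ln_series(0.1)                  # 0.1 + 0.001/3 + 0.00001/5
print(round(ln_y, 6))                  # 0.100335
print(round(ln_y / math.log(10), 6))   # log10(sqrt(11)/3) ≈ 0.043575
```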

To find roots, rewrite f(x) = 0 as x = g(x); an initial approximation X1 can then be improved by calculating X2 = g(X1), X3 = g(X2), etc., provided that, near the root, -1 < g'(x) < 1.
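A minimal sketch of that fixed-point iteration (the rearrangement of x^2 - 2 = 0 into x = g(x) below is my own example; note |g'(x)| < 1 near the root):

```python
def fixed_point(g, x, steps=30):
    """Iterate x -> g(x); converges to a solution of x = g(x)
    when |g'(x)| < 1 near the root."""
    for _ in range(steps):
        x = g(x)
    return x

# Rewrite x^2 - 2 = 0 as x = g(x) = (x + 2/x) / 2:
root = fixed_point(lambda x: (x + 2 / x) / 2, 1.0)
print(root)   # ≈ 1.41421356...
```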

The Newton-Raphson method, in which an approximation X1 to a root A of f(x) = 0 is refined by X2 = X1 - f(X1)/f'(X1), etc., works well if f'(A) is not close to zero and f''(A) isn't too big.

A book on numerical analysis would give you dozens of further tricks. I haven’t used this stuff in years and probably misquoted some of the things I wrote above, but I’m sure others can correct them. :slight_smile:

For example, say you wish to know the cube root of 7. You can graph y = x^3 - 7 and see that it has one real root (the other two are complex). Using Newton-Raphson with f(x) = x^3 - 7 and f'(x) = 3x^2, the root is probably a little smaller than X1 = 2.

x2 = 2 - (f(2)/f'(2)) = 2 - (1/12) = 23/12 = 1.9166…

x3 = 1.9167 - (1.9167^3 - 7)/(3*1.9167^2) = …

So it boils down to simple multiplication, etc.
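It does; the whole Newton-Raphson loop is only a few lines in Python (illustrative sketch, run here on the cube-root-of-7 example):

```python
def newton(f, fprime, x, steps=10):
    """Newton-Raphson: repeatedly replace x by x - f(x)/f'(x)."""
    for _ in range(steps):
        x = x - f(x) / fprime(x)
    return x

# Cube root of 7: f(x) = x^3 - 7, f'(x) = 3x^2, starting at x = 2
root = newton(lambda x: x ** 3 - 7, lambda x: 3 * x ** 2, 2.0)
print(root)   # ≈ 1.91293118...
```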