Complex calculations before computers

I was reading an article about yet another history of the first computer. Most of the time when I read articles/essays like this, they say that computers gave us the ability to do tedious computations and crunch huge sets of numbers.

People must have had a need to crunch huge sets of numbers before computers. What methods were used? How long would it take? What sort of things would be calculated?

Say there was a calculation that would take one of those early computers a few hours to complete. How would that have been done by humans? Would they start calculating, get halfway through, then go home, come back the next day, and calculate some more? How do you even half-calculate something?

I guess I’ll ask that you try to use very small words here, because I am pretty much math-illiterate.

Computers used to be people.

Back in the day, you had rooms full of people whose job was to do things like compute trig tables, various polynomial functions, and the like.

But remember also that problems tend to fill the void that technology creates for them. Nobody was working on stuff that required billions of computations before there were fast digital computers, because there was no way to do that kind of stuff, so there was no reason to work on it.

There were mechanical computers before ENIAC that produced tables of sine functions and the like. And as friedo says, the word computer used to mean a person who would sit in a room with a bunch of other people, each person getting a piece of paper, doing whatever basic mathematical operation they were assigned to the numbers on that paper, writing down the result, and then passing that result on to the next person in the sequence, who would repeat the process. Basically an ALU (the math-doing bit of a CPU) made with people instead of transistors. Since each person had just one simple calculation to do, you could use large numbers of relatively unskilled workers.

Prior to that, mathematicians would just do as you suggest and crank through the calculations over several days, on their own or with the help of one or two servants. I seem to recall one of Napier’s contemporaries churned out tables of tens of thousands of logarithms, good to 14 digits, working all on his lonesome for several years.

Slide rules and logarithm tables were standard equipment for people who had to do lots of relatively complicated calculations (e.g. engineers) more than about thirty years ago.

(I vaguely recall reading science fiction stories by the likes of Robert Heinlein where someone piloted a spaceship with the help of a slide rule.)

Specifically, they were usually women, since at the time, scut work like that was the only way that most women were allowed into the sciences.

But friedo is right about problems expanding to fill the available technology. All of physics involves making assumptions and approximations, and as computational ability has increased (whether by increases in technology, new mathematical techniques, or just hiring a bunch more people with slide rules and abaci), some of those approximations get relaxed. For instance, let’s say I want to model the Sun. The simplest model would have it as a single sphere of uniform density, and that’s good enough for a lot of purposes, and can be easily done by hand. Or, for a little more work, I can model it as a spherical core with a few various discrete layers on top of it, still a one-person job. Or I can give it continuously-varying (but still spherically-symmetric) properties as a function of radius, and now I’m probably going to need a whole bunch of tables of mathematical functions at the least. Or I can take into account rotation, so it’s not spherically symmetric any more, and now I have to give everything in terms of distance from the center and latitude. Or I can consider the magnetic field, too, and now I need to start worrying about three dimensions, and things probably aren’t smooth any more, either. These are problems for which a computer is absolutely essential, and a solar scientist before such computing power was available would either have ignored those problems entirely, or just addressed them in a vague, back-of-the-envelope way (probably just enough to conclude that the effects were negligible for purposes of whatever it was he was studying).

William Shanks spent most of his life calculating Pi to 707 digits.

Poor guy. A mistake at 527 threw off the rest of his work. Still, his accomplishment is very important.

I’ve wondered about the people that calculated the first log tables. The work must have been so tedious.

Log tables were vital in science and math before computers. The people that calculated log tables deserved some recognition.

Only 75% correct, eh? Well, that’s a C or C- at best.

Great scene in the Ron Howard film Apollo 13 where calculations are double-checked - by 3 different men at the same time - for re-entry of the craft. Computers were shut down to preserve battery juice, and they did it old-school. With slide rules.

Dad carried a slide rule in his shirt pocket. He was a science writer and used it quite a lot apparently.

Me, I’m utterly fascinated by the ability to do high math with one. But then, I failed Algebra I… 3 times. :confused:

Cartoonverse

Richard Feynman, in the “Los Alamos From Below” chapter of Surely You’re Joking, Mr. Feynman!, described the process by which the Manhattan Project performed the calculations needed for the first nuclear bombs. To someone from today, it is a fascinating glimpse of a wholly alien mindset.

They set up a room full of people with huge mechanical calculators. Each person was assigned to a single operation. One person would add two numbers and pass the result on to the next person, who would multiply it by something else. The next person would take the square root and pass it on, and so forth. A single calculation could take months.

Feynman restructured the calculations to proceed in parallel. Each individual calculation still took two months, but they were able to turn out one finished calculation every two weeks.

The book describes how they set up error checking procedures, which were essentially subroutines that recalculated intermediate results when necessary. Occasionally, an error would be made within the error correcting subroutine, and they had to set up procedures to deal with that.

Today this could all be handled on a high-end programmable calculator in minutes.

Truly, Feynman had a dizzying intellect.

Must have been this guy.

He did, although that doesn’t show it. Frankly, the level of inefficiency on display before he made that change was grotesque.

Even more specifically, the word “computer” used to refer to a job description for a person, not a device.

I’m not sure if it was before or after Feynman, but I read somewhere that these types of calculations started at one corner of the room, and proceeded to the other corner of the room for the final result. However, the calculation was performed by (at least) two parallel routes, so that (at least) two final results were produced. These two results would be compared to provide a check against an error.

Very good answers, folks!

Please feel free to add more. BrotherCadfael’s story was a good one :slight_smile:

I recall back in the late 50’s, we had an engineer who had to solve 7 simultaneous equations. He disappeared into his office and didn’t come out for three days.

You young whippersnappers just don’t know what it was like back in the olden times.

Karl Schwarzschild, the guy who first put forth the notion of a Black Hole, did that work while he was on the Russian front in WWI where his job was calculating artillery tables.

Insane tedium (artillery tables) while being shot at and conjuring up the idea of a black hole in there somewhere.

Boggles the mind (he also suffered from a painful skin disease to boot which, I think, ultimately killed him).

That struck me in Foundation & Empire, in both parts, where members of the Foundation travel to Trantor - all the calculations for the Jumps are done by hand, and “the places to the right of the decimal point began to have greater importance” in the dense Galactic core than when calculating Jumps out in the thinly starred region of Terminus. Surprising that Asimov didn’t foresee the rise of computers, considering that Foundation has one of the first predictions of a modern hand-held calculator.

On making engineering calculations with slide rules:

Digital computers tend to hide the fact that most often values aren’t actually known to better than 2-3 digits of accuracy. Slide rules kept this fact at the forefront, and sometimes you had to sequence the order of operations carefully in order to preserve the accuracy.
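The same ordering concern carries over to limited-precision digital arithmetic. Here is a small hypothetical sketch in C (my own illustration, not anything from the jobs described below), using 32-bit floats: adding the small terms before the big one preserves them, while adding them afterward loses them to rounding.

```c
#include <stdio.h>

int main(void) {
    float big = 1.0e7f;    /* exactly representable in a 32-bit float */
    float small = 0.25f;   /* each small term is far below big's precision step */
    int n = 1000;

    /* Big value first: each 0.25 rounds away, so the 250 total never appears. */
    float small_last = big;
    for (int i = 0; i < n; i++)
        small_last += small;

    /* Small terms first: they accumulate exactly, then survive the final add. */
    float small_first = 0.0f;
    for (int i = 0; i < n; i++)
        small_first += small;
    small_first += big;

    printf("small terms last:  %.2f\n", small_last);   /* 10000000.00 */
    printf("small terms first: %.2f\n", small_first);  /* 10000250.00 */
    return 0;
}
```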

In certain cases, the presumed superiority of digital computers actually masks and/or causes errors, and here is an actual example:

This involves the control system for a water treatment system. It needed to keep track of the operating time, and so that the operator could easily verify that the timer was working, the display had to have 1-second resolution and be updated every second. The system was intended to operate for decades, and the timer should not overflow for at least 50 years.

The original programmer decided to use a Float32 variable to represent the seconds of operation, allowing values up to somevalue * 10^38 seconds. So the seconds counter would not overflow for something * 10^30 years; thus, overflow would never be a problem.

But hold on a second! (Get it?) The precision of that 32-bit floating point is only a little over 7 decimal digits. So after about 4 months, when it adds 1 second to, say, 1.000000 * 10^7 seconds, the answer is 1.000000 * 10^7 (the same). The timer has stopped, because it has reached the point where one second is below the available precision. You can add 10 to that number, but you can’t add 1. The timer that was intended to work well beyond the expected lifetime of the universe will fail after ~4 months, and it did exactly that, which is how I got hired to fix it.
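To make that failure mode concrete, here is a minimal sketch in C (an assumption of mine; the actual PLC code was not C) that ticks a 32-bit float seconds counter until adding one second no longer changes the stored value:

```c
#include <stdio.h>

int main(void) {
    float seconds = 0.0f;   /* 32-bit float counter, ~7 significant decimal digits */

    /* Tick once per "second" until the increment falls below the float's precision. */
    for (;;) {
        float next = seconds + 1.0f;
        if (next == seconds)   /* adding 1 no longer changes the value: the timer stalls */
            break;
        seconds = next;
    }

    printf("counter stalls at %.0f seconds (about %.0f days)\n",
           seconds, seconds / 86400.0);
    /* In this simplified sketch the stall comes at 2^24 = 16,777,216 seconds,
       roughly six months; the real system's exact failure point depends on its
       tick size and update logic. */
    return 0;
}
```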

I recoded this to use a 32-bit integer for the seconds counter. This will overflow after “only” 96 years of operation, by which time it will not be my problem. (But it will warn the operator if/when it gets close… because after Y2K I am anal that way.)

There was a similar issue that involved integrating the reading from a flow meter to indicate volume. The programmer had not accounted properly for rounding error in a division, so it was under-reporting gallons by about 5-10%. This was also recoded to integer math, keeping track of and recycling the remainder to eliminate all accumulated error.
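A minimal sketch of that remainder-recycling idea, in C with made-up scaling (the real meter’s counts-per-gallon, scan rate, and data types are assumptions here):

```c
#include <stdio.h>

/* Hypothetical scaling: COUNTS_PER_GALLON converts raw meter counts to gallons. */
#define COUNTS_PER_GALLON 37UL

static unsigned long total_gallons = 0;
static unsigned long leftover_counts = 0;   /* remainder carried between scans */

/* Called once per scan with the counts accumulated since the last scan. */
void add_flow(unsigned long counts)
{
    leftover_counts += counts;
    total_gallons  += leftover_counts / COUNTS_PER_GALLON;  /* whole gallons */
    leftover_counts %= COUNTS_PER_GALLON;                   /* recycle the rest */
}

int main(void)
{
    /* 1,000 scans of 10 counts each = 10,000 counts = 270 gallons, 10 counts left.
       Dividing and truncating on every scan instead would report 0 gallons here,
       since each scan's 10/37 truncates to nothing. */
    for (int i = 0; i < 1000; i++)
        add_flow(10);
    printf("gallons = %lu, leftover counts = %lu\n", total_gallons, leftover_counts);
    return 0;
}
```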

Now, another option could have been to go to 64-bit floating point variables, but that was not an option on this PLC-based system, and it would only have reduced the errors (OK, reduced them a LOT), while the integer solution reduced the math errors to zero in both cases, for half the memory resources (which are somewhat limited on a PLC).

Since then I have used the experience to avoid this sort of problem on other projects. It takes a while to make younger programmers see the issue, but those old enough to have used a slide rule only need a hint to get it.

On days when I yell at kids to get off my lawn, I worry that kids today are not only not learning how to do calculations, they are also failing to learn how calculations work.

Note: Float32 representation limitations are exact in binary. I used decimal approximations I know off the top of my head to illustrate the problem, because I’m too lazy to work out or research the exact numbers.

Wait till he gets going!