Complex calculations before computers

If you’re familiar with scientific notation, it’s like putting every number in that format:

1729.03 = 1.72903 x 10^3
-1899012530000000000000000 = -1.89901253 x 10^24
0.000045 = 4.5 x 10^-5

Suppose I only have 20 digits in which to write my numbers. I can get rid of the parts that don’t change and write them this way:

(s) (dddddddddd) (s) (eeeeeeee)

The first group of digits (d) is assumed to be of the form d.ddddddddd, with the decimal point after the first digit. Each (s) is a sign, one for the number and one for the exponent (I’ll use 0 to mean positive and 9 to mean negative). My numbers above can then be written as

0 1729030000 0 00000003
9 1899012530 0 00000024
0 4500000000 9 00000005
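If anyone wants to play with this made-up 20-digit format, here’s a rough Python sketch. The layout is the one described above; the function name (encode20) and the truncate-to-ten-digits behavior are my own choices, not anything specified in the post.

```python
from decimal import Decimal

def encode20(x):
    """Encode x as (s)(dddddddddd)(s)(eeeeeeee), with 0 = positive, 9 = negative."""
    sign = '9' if x < 0 else '0'
    d = abs(Decimal(str(x))).normalize()
    if d == 0:
        return '0 ' + '0' * 10 + ' 0 ' + '0' * 8
    exponent = d.adjusted()          # power of 10 of the leading digit
    mantissa = d.scaleb(-exponent)   # shift so it reads d.ddd...
    digits = str(mantissa).replace('.', '')[:10].ljust(10, '0')
    esign = '9' if exponent < 0 else '0'
    return f"{sign} {digits} {esign} {abs(exponent):08d}"

print(encode20(1729.03))         # 0 1729030000 0 00000003
print(encode20(-1.89901253e24))  # 9 1899012530 0 00000024
print(encode20(0.000045))        # 0 4500000000 9 00000005
```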

What if I want to write this number?

30.0000000008 = 3.00000000008 x 10^1

I have a problem. I can’t write it this way, because there aren’t enough (d) digits to fit it. That’s the problem you run into with loss of precision. The upside is that I can write numbers that have far more than 20 digits in them, which is useful if I need to work with numbers that vary over a huge range (up to about +/- 10^99999999 here, with eight exponent digits). Really, though, I only have the same total number of distinct values to work with (10^20 for 20 digits); they’re just spread out more.
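Real IEEE 754 doubles hit the same wall, just in binary: they carry roughly 15–16 significant decimal digits, and anything past that quietly disappears, even though the representable magnitudes are enormous. A quick Python illustration:

```python
x = 30.000000000000008     # 17 significant digits -- one too many
print(x)                   # 30.000000000000007, not quite what we typed
print(10.0 ** 300)         # 1e+300 -- huge magnitudes are no problem, though
```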

When I do math, I can add tiny quantities to large ones, and if they’re tiny enough, it doesn’t affect my large number. That’s useful in many engineering applications, but not when those tiny numbers are meaningful (as in the second-counting example). The moral of the story is to never use floating point values unless you have to and know what you’re doing.
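Here’s that absorption effect with ordinary Python floats (IEEE 754 doubles underneath):

```python
big = 1.0e16
print(big + 1.0 == big)     # True: 1 is smaller than the gap between adjacent
                            # doubles at this magnitude, so it simply vanishes
print(big + 1000.0 == big)  # False: 1000 is large enough to register
```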

(Side note: Computers store numbers in binary, and so get away with an extra bit of precision. Since a normalized binary number in scientific notation can’t start with a 0, it must start with a 1, and you don’t need to store that 1.)
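You can actually see the missing bit if you dump a double’s stored bytes (Python sketch; the spacing in the printed output is just added for readability):

```python
import struct

bits = struct.unpack('>Q', struct.pack('>d', 1.5))[0]  # raw 64 bits as an int
s = f'{bits:064b}'
print(s[0], s[1:12], s[12:])   # sign, exponent, fraction
# 0 01111111111 1000000000000000000000000000000000000000000000000000
# 1.5 is 1.1 in binary (1.1 x 2^0); only the ".1" part is stored -- the
# leading 1 is implied, which is the "extra bit" mentioned above.
```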

The Manchester Small-Scale Experimental Machine, aka the Baby, was the first because it was the first electronic, digital, stored-program computer. That meant it wasn’t a loom (electronic), it wasn’t an analog machine (digital), and it actually worked the way computers work now (stored-program).

(Also, the Kenbak-1, from 1970, was the first personal computer. Everything prior to it was either not personal, or not a computer.)

(Seriously. I’ve seen people claim with a straight face that CMS running on a System/360 mainframe was the first personal computer. You can only believe that if you enjoy redefining words into a fine pulp.)

Others have answered this well, but the fact that they needed to is indicative of the issue I was illustrating: computers can create an illusion of accuracy that has unexpected holes.

But you might anyway, depending on the precise format your computer, compiler, etc. use. There are advantages and tradeoffs either way.

I think there are two different stories getting conflated here … This process was designed for the actual ‘modern’ computers: IBM provided some early punched-card machines for the Manhattan Project. The earlier discussions were about the previous process of using human ‘computers’ with mechanical calculators, which they used while they were waiting for the IBM equipment to arrive. Feynman claims that after his reworking of the process, the humans were finishing problems at nearly the predicted speed of IBM’s machines. (Conclusion: they were running Windows. :wink: )

What are the advantages to wasting space storing the constant leading 1? Surely, whenever you might want it to be present, you can just add it back, no problem.

If the one is implied, then you can’t represent zero without making a special exception, which you then have to check for in every operation.

Ah, good point. Though using a sign bit already implies some special case checking, and having zero as a special case along with that is really no different. But, yeah, that’s a good answer.

Keeping the initial 1 also lets you handle underflows a little more gracefully. Let’s say, for instance, that the smallest exponent allowed is -255. If you’re always chopping off the initial 1, then the smallest number you can handle is 1.000000e-255, and if you go any smaller than that, then you suddenly drop off the deep end, probably with bad results. If you’re leaving it, though, then you still have a little more wiggle room where you can go to 0.1111111e-255, or 0.100000e-255, or 0.000100e-255, or whatever, and just gradually lose precision instead of abruptly crashing and burning. Now, granted, if you’re paying attention and using good programming practices, you shouldn’t ever get close to your underflow limit (if you are, you need to restructure your problem), but sloppy programming does happen.

Sure, de-/subnormal numbers. But those are accounted for in IEEE floating point formats not by actually storing the leading bit of the significand, but by instead taking it to be implicitly 1 most of the time except when the exponent is the smallest possible (just as in your examples), in which case it’s implicitly 0 instead. Which is a fine system, but it still isn’t actually wasting space storing the leading bit, which was what I was wondering about the advantages of.
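For anyone who wants to see that gradual underflow in action: CPython floats are IEEE 754 doubles, where the implicit bit flips to 0 once the exponent field hits its minimum, so you get subnormals instead of an abrupt drop to zero.

```python
import sys

smallest_normal = sys.float_info.min
print(smallest_normal)       # 2.2250738585072014e-308, smallest normal double
print(smallest_normal / 2)   # 1.1125369292536007e-308, a subnormal: still
                             # nonzero, just with fewer bits of precision
print(5e-324)                # 5e-324, the smallest positive (subnormal) double
print(5e-324 / 2)            # 0.0 -- only here does it finally underflow
```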

Actually, two astrogators recognized the mistake, but the senior one was unwilling to tell the captain (the third astrogator) that he was wrong.

“Starman Jones” had electronic calculators too, but they were pretty primitive, with binary inputs and outputs. One of the things they were looking up in their tables was binary-to-decimal conversion. It sounds pretty primitive, but in terms of discrete transistors and a computer about the size of a refrigerator using 1953 technology, it wouldn’t be far off.

I also got the impression that regular ballistic calculations were pretty easy, but during a transition they had to make course corrections every few minutes to hit their window.

Of course it does sort of raise the question of why they didn’t use a bigger computer that didn’t require all the gymnastics to do calculations. Their starships didn’t seem to be hard up for space; the descriptions sounded a lot like an ocean liner. I suspect they kept the older primitive computers to protect the privileges of the astrogators guild.

It wasn’t triple redundancy. They were doing a round robin, with each astrogator doing a different sighting. The only reason Max Jones knew that the Captain had made a mistake was that he had an eidetic memory and was doing all the sightings in his head, so he realized that the Captain had transposed two digits. The second astrogator undercorrected on his shot, and by the time it came back to the Captain the error was huge but still fixable; the Captain, however, applied the correction with the wrong sign.

Milutin Milankovitch spent decades (some of them under house arrest during WW1) calculating changes in the incidence of solar radiation at all latitudes of the earth over hundreds of thousands of years, due to the earth’s tilt, pitch, and wobble. He was looking for an explanation for ice ages.