How far is it from the Empire State Building to San Francisco City Hall, based on the Coast and Geodetic Survey lat-lons of 40.748329N 73.986067W and 37.779341N 122.418135W?
Lambert’s distance formula
would be accurate to a meter or two, and the lat-lons are less accurate than that, so we might as well use it. Filling in a couple of blanks in the Wikipedia article:
tangent of reduced latitude = (6356583.8 / 6378206.4) × tangent of actual latitude
a = 6378206.4 meters
f = 21622.6 / 6378206.4
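For concreteness, here is the whole method as a sketch in modern JavaScript, following the Wikipedia formulation (reduced latitudes, central angle σ on the auxiliary sphere, then the X and Y correction terms). The function name and packaging are mine; treat it as an illustration, not a vetted geodesy routine. West longitudes go in as negative numbers.

function lambertDistance(lat1, lon1, lat2, lon2) {
  const a = 6378206.4;            // Clarke 1866 equatorial radius, metres
  const f = 21622.6 / 6378206.4;  // flattening, (a - b)/a
  const rad = Math.PI / 180;
  // Reduced latitudes: tan β = (1 - f) tan φ
  const β1 = Math.atan((1 - f) * Math.tan(lat1 * rad));
  const β2 = Math.atan((1 - f) * Math.tan(lat2 * rad));
  // Central angle σ on the auxiliary sphere (spherical law of cosines)
  const Δλ = (lon2 - lon1) * rad;
  const σ = Math.acos(Math.sin(β1) * Math.sin(β2) +
                      Math.cos(β1) * Math.cos(β2) * Math.cos(Δλ));
  const P = (β1 + β2) / 2, Q = (β2 - β1) / 2;
  // Note the half-sigmas here, as discussed below
  const X = (σ - Math.sin(σ)) * (Math.sin(P) * Math.cos(Q)) ** 2 / Math.cos(σ / 2) ** 2;
  const Y = (σ + Math.sin(σ)) * (Math.cos(P) * Math.sin(Q)) ** 2 / Math.sin(σ / 2) ** 2;
  return a * (σ - (f / 2) * (X + Y)); // metres
}
// lambertDistance(40.748329, -73.986067, 37.779341, -122.418135)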
I assume you wouldn’t do this calculation on any 1950s computer – programming the computer would take longer than doing the problem manually, assuming you have a big calculator and multikilogram volumes of trig tables and logarithms. (I guess that’s why the formulas have those half-sigmas in X and Y? To help the logarithm user?)
But if we’re publishing a book of distances, with hundreds of such calculations to be done, we do want to program the computer. Once we’ve done that, how long would an IBM 701 take to do one distance calculation? 0.1 sec? 1 sec? 10 sec? 100 sec?
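For a rough sense of scale: the usual quoted figures for the 701 are about 16,000 additions/subtractions and roughly 2,000 multiplications/divisions per second. A Lambert distance needs maybe 15 trig and square-root evaluations, and a polynomial routine good to 5 or 6 decimals costs on the order of 10–20 multiplies each after range reduction, so call it a couple hundred multiplies plus bookkeeping per distance. That’s back-of-envelope, not a measurement, but it puts one distance near the low end of that range: very roughly 0.1–0.2 seconds, ignoring card and tape I/O.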
How long would the programming take? The program would be a stack of punchcards? To be read into drum memory? It would have to include the algorithms for calculating sines and cosines? Did the drum memory have room for everything, or would the algorithms have to go on tape?
“How long would the programming take” and “How long would the computation take” are sort of complementary questions, especially in the era you’re talking about. A naive implementation of the program could be done fairly quickly, but it wouldn’t be the most efficient calculation. Nowadays, you’d say “Fine, not a problem, it just means the program will take a thousandth of a blink of an eye to run instead of a ten-thousandth”, but in 1950, you absolutely wanted to make sure that your code was efficient.
The answer also depends on how much of your work is already done. The formula uses trig functions: Has anyone else already written an implementation for the trig functions on your computer, so you can just re-use their work? Or do you need to create your own trig functions? Or maybe you want to do that anyway, for the sake of efficiency: If the implementation you already have is good to 8 decimal places, but you only need 5, maybe you should write a new function that’s more efficient at the expense of accuracy.
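To make that tradeoff concrete, here’s a toy sketch (mine, not any historical routine): sine good to roughly 5–6 decimal places from short truncated Taylor polynomials, after folding the argument into [-π/4, π/4]. Each extra digit of accuracy buys you another term, i.e. more multiplies.

function fastSin(x) {
  const halfPi = Math.PI / 2;
  let k = Math.round(x / halfPi);     // nearest multiple of π/2
  const r = x - k * halfPi;           // remainder in [-π/4, π/4]
  k = ((k % 4) + 4) % 4;              // which quadrant we folded from
  const r2 = r * r;
  // sin r ≈ r - r³/6 + r⁵/120 - r⁷/5040 (nested to save multiplies)
  const s = r * (1 - r2 / 6 * (1 - r2 / 20 * (1 - r2 / 42)));
  // cos r ≈ 1 - r²/2 + r⁴/24 - r⁶/720
  const c = 1 - r2 / 2 * (1 - r2 / 12 * (1 - r2 / 30));
  return k === 0 ? s : k === 1 ? c : k === 2 ? -s : -c;
}

A production routine would use minimax coefficients rather than Taylor ones to squeeze more accuracy out of the same number of terms, but the structure is the same.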
To give you an idea of the sorts of accuracy vs. speed vs. understandability tradeoffs, in the late 1990s, the programmers for Quake created an entirely new implementation of a function to find 1/\sqrt{x}. It was, well, interesting. But it was also very quick, and good enough.
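The original was C, reinterpreting the float’s bit pattern as an integer. Here’s the same trick sketched in JavaScript with typed arrays (the constant 0x5f3759df is the one from the released Quake III source; the packaging is mine):

const buf = new ArrayBuffer(4);
const f32 = new Float32Array(buf);
const u32 = new Uint32Array(buf);
function fastInvSqrt(x) {
  const halfX = 0.5 * x;
  f32[0] = x;                           // view the float's raw bits...
  u32[0] = 0x5f3759df - (u32[0] >>> 1); // ...shift and subtract for a first guess
  let y = f32[0];                       // view the bits as a float again
  y = y * (1.5 - halfX * y * y);        // one Newton-Raphson step (~0.2% max error)
  return y;
}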
I’m just guessing here, but I suspect that by the time Fortran came along (late 1950s) the trig functions would have been built in. Before that, it would have been difficult. In 1955, Penn was given a Univac I, and the lab I worked in had a program written to solve systems of simultaneous ordinary differential equations. The actual memory consisted of 1000 words of twelve 6-bit bytes. The program took 80,000 double words, about half of which were concerned with reading the right pieces of the program into the active memory at each step. What a mess! Fortran changed a lot.
There is code online for that. Other than the trig functions (and square root) it’s trivial; I could write the assembly…
// Haversine great-circle distance (lat/lon in degrees, result in metres)
function haversineDistance(lat1, lon1, lat2, lon2) {
  const R = 6371e3; // mean Earth radius, metres
  const φ1 = lat1 * Math.PI/180; // φ, λ in radians
  const φ2 = lat2 * Math.PI/180;
  const Δφ = (lat2-lat1) * Math.PI/180;
  const Δλ = (lon2-lon1) * Math.PI/180;
  const a = Math.sin(Δφ/2) * Math.sin(Δφ/2) +
            Math.cos(φ1) * Math.cos(φ2) *
            Math.sin(Δλ/2) * Math.sin(Δλ/2);
  const c = 2 * Math.atan2(Math.sqrt(a), Math.sqrt(1-a));
  return R * c; // in metres
}
// e.g. haversineDistance(40.748329, -73.986067, 37.779341, -122.418135)
For the trig functions and sqrt it occurs to me: before high-level languages with math functions, did a software library mean a literal library? Were there books where you could look up the assembly/machine code for sqrt or atan2 and just copy it by hand?
Since each machine – even from the same manufacturer – had a different architecture, this seems unlikely. This was one of the biggest things about S/360: compatibility across the line.
Most likely, the library would be a deck of punch cards with the machine instructions for performing the trig functions / square root on them.
I don’t know if the 701 had any kind of a subroutine branch instruction, but if it did it would be easy to integrate several functions into your program by simply copying the card deck.
The algorithms were well known. I had both generic algorithms, which I could code, and some specific algorithms, already coded for the microprocessors I was using. The stuff was in books: this was before the internet.
Mainframe types shared tapes.
The Quake algorithm was not well known, because it used a mixture of floating point and fixed point math to implement logarithms. That’s at the intersection of three bodies of knowledge rather than just two.
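(The core of it: read a positive IEEE 754 float’s bit pattern as an integer and you get, approximately, 2^23 × (log2 x + 127) – a fixed-point encoding of its logarithm. The shift-and-subtract 0x5f3759df - (i >> 1) therefore computes roughly -½ log2 x plus a bias in that fixed-point form, i.e. log2 of 1/\sqrt{x}, which is why the bits, read back as a float, are already a decent first guess.)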
Though that’s much later. By then there were perfectly reliable standard C library functions for trig and such. It’s just that they couldn’t be relied upon to be fast enough for real-time applications (particularly if you didn’t care so much about precision). That was still the case when I started in games in the early 00s. It was a long way into my career before the old timers were convinced they couldn’t do better than the standard C functions (I bet there are still games shipped today with franksSuperFastATan() hiding somewhere in the code).
In case it wasn’t clear: the calculation I asked about is a distance calculation on the spheroid, not on the sphere. (Although the sphere calculation is the first step in that spheroid calculation.)
Were the earliest computers like ENIAC and the Mark I designed specifically for long-range artillery calculations?
I know they had to factor in wind and other considerations in addition to mere distance accuracy, which greatly complicated the equations, but wasn’t this work that would have been built on by the 50s? Work on guided missiles ramped up quickly because of the Cold War, so this would have been a pressing issue, not a mere scientific curiosity.
I’m sure too - analog computers had been used for the purpose before electronic ones. I’m asking whether specialized routines for electronic computers had been developed given their rapid advancement from the 40s.
The late 50s are a unique narrow slice of computer development. The 709 had only a 2-year life span. Systems like the Univac 1000 and IBM 704 gathered the then-current state of technology into packages that could be manufactured and maintained in a commercial environment. Once deployed, their applications began to better define the computer market – like the problems at JPL. The 709 addressed this with the Data Synchronizer and specialized instructions for synchronizing and storing bulk data (DMA). The 704 was a numeric processor; the 709 was a numeric and bulk data processor. Much of its design came from the SAGE computer.
Of course the other influence was transistors. Transistors were slow and not easy to manufacture. Philco did introduce a short-lived computer using its surface-barrier technology. Fairchild developed a planar process that made mass production of high-performance germanium transistors a reality. IBM used that technology for its 01 and 02 complementary pair, which made possible memory access in under 1 microsecond.
So, when talking about 50s computers, we are looking at a period of 10 years between the computer disrupting the marketplace and maturing into a commodity product. With the introduction of the IBM 360, the computer had become a commodity. I was working in R&D by then. The company sent out an internal memo – ‘The stored program computer is no longer a market differentiator’.
Sort of. That stuff didn’t really settle down until C99 – the same time as Quake. And the direction for C was to compete with Fortran for accuracy and reliability – in the early 90s I was still working with custom trig implementations for low-accuracy speed.