Mechanical Computers, limits & possibilities

How far could computers come if they were made of cogs & such, like Babbage’s Analytical Engine?

Assume these computers got the same level of R&D that electronic computers did, & some kind of cog-based Moore’s Law where there’s an exponential increase in the number of cogs you can fit in a given space.

It’s my (probably flawed) understanding that everything a computer does can be broken down to on or off (0 & 1) so I expect it might be theoretically possible to do that with tiny mechanical levers or something.

So what would be the limiting factor in this? And would there be any weird problems or advantages?

Everything a digital computer does can be broken down to binary logic by definition. The physics of how transistors actually function to form logic gates is more complicated, of course, but from the programming standpoint it is all “0 & 1”.
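To make that concrete, here’s a minimal Python sketch (the choice of NAND as the primitive and the half-adder demo are mine, purely for illustration): any two-state device that can implement NAND, whether it’s a transistor or a spring-loaded lever, suffices to build the rest of binary logic.

```python
# A single two-state primitive is enough: everything below is built
# from NAND, which any on/off mechanism could in principle implement.
def nand(a, b):
    return 0 if (a and b) else 1

def inv(a):     return nand(a, a)
def and_(a, b): return inv(nand(a, b))
def or_(a, b):  return nand(inv(a), inv(b))
def xor(a, b):  return and_(or_(a, b), nand(a, b))

def half_adder(a, b):
    return xor(a, b), and_(a, b)   # (sum bit, carry bit)

for a in (0, 1):
    for b in (0, 1):
        print(a, b, "->", half_adder(a, b))
```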

The limiting factors in how complex you could make a mechanical “difference engine” ultimately boil down to mechanical friction, inertia, and machine tolerances. Any mechanical device of sufficient complexity is going to have enough slop in tolerances and losses due to friction and inertia that at some point it just cannot produce a calculation in a reasonable time. Consider an adding machine, which is actually nothing more than a set of ratchets and levers forming a self-regulating abacus. To add together a set of numbers on an adding machine is easy enough, but if you try to operate it too fast the works will bind up or you’ll bend a lever (an all too common occurrence).
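For a feel of what those ratchets are doing, here’s a toy Python model of the carry mechanism (the wheel count and layout are my assumptions, not any particular machine’s): every carry is a physical lever that has to trip, and a row of 9s means a whole chain of them must move in sequence.

```python
# Toy model of a mechanical adding machine: each digit is a ten-position
# wheel; advancing a wheel past 9 trips a carry lever on its neighbour.
class AddingMachine:
    def __init__(self, digits=8):
        self.wheels = [0] * digits      # least significant wheel first

    def crank(self, wheel, steps):
        # Advance one wheel click by click; a wheel passing 9 trips the
        # carry lever on the next wheel, which can ripple down the row.
        for _ in range(steps):
            self.wheels[wheel] += 1
            i = wheel
            while i < len(self.wheels) - 1 and self.wheels[i] == 10:
                self.wheels[i] = 0       # wheel rolls over...
                i += 1
                self.wheels[i] += 1      # ...and bumps the next wheel

    def add(self, n):
        for place, digit in enumerate(reversed(str(n))):
            self.crank(place, int(digit))

    def read(self):
        return int("".join(str(d) for d in reversed(self.wheels)))

m = AddingMachine()
m.add(907)
m.add(95)
print(m.read())   # 1002: a chain of carry levers trips in sequence
```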

An electronic computer, on the other hand, can basically operate at the speed at which signals propagate through wires, which is some reasonable fraction of the speed of light (typically half to two-thirds of c in real interconnects), so the real limit on how fast an electronic computer can make an individual calculation is essentially how fast you can deliver sufficient power to operate the logic gates, and at a thermodynamic limit, how fast it can reject waste heat such that tubes or transistors still function in normal ranges. Modern integrated digital computers perform many millions of calculations in parallel on transistors too small for the eye to see, so the actual limits are fundamentally how many circuits you can cram onto a chip and/or how finely you can break up a computation into individual calculations and then put them back together.
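A quick back-of-envelope (the sizes and the 0.6 c figure are illustrative assumptions) shows why sheer physical scale already handicaps a room-sized machine, before friction even enters into it:

```python
C = 3.0e8          # speed of light, m/s
v = 0.6 * C        # assumed signal velocity in interconnect

for name, size_m in [("modern chip", 0.02), ("room-sized machine", 10.0)]:
    t = size_m / v
    print(f"{name:20s} {size_m:6.2f} m -> {t * 1e9:6.2f} ns per crossing")
# ~0.11 ns across a 2 cm chip; ~56 ns across a 10 m machine, which
# alone caps any machine-spanning clock below ~20 MHz, and mechanical
# linkages move about a million times slower than this.
```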

It should be understood that Moore’s Law, coined by Gordon Moore in 1965, isn’t based upon any kind of fundamental physical principles but is rather an observation on how fast manufacturing technology progresses to make integrated circuits finer and faster. It has nothing to do with thermodynamic limits, and in fact, there is a foreseeable end to Moore’s law based on thermodynamics (although it has been pushed out by some innovations in lower power computing and new materials), and the tendency by many to apply “Moore’s Law” to other areas of technological development such as electrochemical batteries or materials science is ill-informed and generally incorrect.

If we were limited to Babbage-type mechanical difference engines, we would not be able to do any of the amazing things we can do today, from highly responsive flight control systems to navigating trajectories within kilometers at interplanetary distances to modern communications. We’d basically be restricted to building room-sized adding machines that would need constant maintenance and repair to do any large scale calculations.

Stranger

Apparently, Babbage’s Analytical Engine was ‘Turing Complete’ - which means in theory (and if you attach it to a sufficiently large storage memory), it can do anything that any computer can do - if you’re patient enough.

So realtime stuff would be out, but anything else is theoretically possible. In practice, some complex tasks might actually run up against the wear limits of the material before they are completely computed.
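To give a sense of just how little machinery Turing completeness demands, here’s a Python sketch of a one-instruction (“subleq”) computer; the add-two-numbers program is a toy of my own construction, not anything from the Analytical Engine.

```python
# One instruction, "subleq a b c": mem[b] -= mem[a]; jump to c if the
# result is <= 0, else fall through. This alone is Turing complete.
def subleq(mem, pc=0):
    while pc >= 0:                       # a negative target halts
        a, b, c = mem[pc], mem[pc + 1], mem[pc + 2]
        mem[b] -= mem[a]
        pc = c if mem[b] <= 0 else pc + 3
    return mem

prog = [9, 11, 3,     # Z -= A  (Z becomes -A; result <= 0, branch on)
        11, 10, 6,    # B -= Z  (i.e. B += A; positive, fall through)
        11, 11, -1,   # Z -= Z  (reset Z to 0; result <= 0, halt)
        7, 5, 0]      # data at addresses 9, 10, 11: A=7, B=5, Z=0
print(subleq(prog)[10])   # 12
```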

nm

We only needed a slide rule to fake the moon landings …

Not even close. In order to fake the moon landings convincingly, including fooling the tens of thousands of scientists, engineers, and technicians, we had to create all of the necessary technology and then send actual astronauts to the Moon to appropriately simulate the motion of dust in lunar gravity.

Stranger

I have seen studies showing that Moore’s Law held even before the invention of the transistor - it just wasn’t recognized.
The way to go forward would have been to build nanoscale mechanical devices on chips, similar to the machines in air bags that detect acceleration. However, to do that you might have to develop the same technology used for ICs, so I don’t see it as a viable path.
The real problem is speed. A mechanical computer is always going to be bigger than an electronic one, and thus slower in getting a signal from one side to the other; on top of the longer distances, the signals themselves (moving rods and levers rather than electrical pulses) propagate far more slowly.
Another problem - one of the reasons digital computers work as well as they do is that messy signals get cleaned up at each gate. Logic design books show you square waves coming into the inputs of gates - in real life there is no such thing as a square wave. How many mechanical gates would a signal have to go through before it was too messy for the mechanical latch to capture it correctly? Not many, I expect.
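A crude simulation (the 5% per-stage slop figure is an assumption pulled out of the air) illustrates the difference between gates that regenerate the signal and linkages that just pass the error along:

```python
import random

SLOP = 0.05   # assumed per-stage mechanical play, 5% of full travel

def run(stages, restore):
    level = 1.0                                # an ideal logic "1"
    for _ in range(stages):
        level += random.uniform(-SLOP, SLOP)   # each stage adds slop
        if restore:                            # electronic gate: snap to rail
            level = 1.0 if level > 0.5 else 0.0
    return level

random.seed(1)
for n in (10, 100, 1000):
    print(f"{n:5d} stages: restored={run(n, True):.2f}  "
          f"unrestored={run(n, False):.2f}")
# With restoration the level stays a clean 1.0 indefinitely; without
# it, the error random-walks and eventually swamps the signal.
```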
Then there is clocking. I can’t see a clock signal being routed in a mechanical system. It would be more of a dataflow, asynchronous design, and these have had lots of problems in the past, and have always lost out to synchronous designs.
Finally there is testing and diagnosis. We put extra logic into digital designs for this, which allows us to observe and control internal points. I don’t see how you can do this in a large mechanical system.

The programmers at Draper Labs needed computers to fake it.
But a 1955 Astounding version of the moon landing would have Buzz Aldrin working a slide rule on the way down. Tough to do in gloves.

I can’t recall ever seeing a speed spec for internal power buses. Now if you switch enough stuff at the same time you get droop, but that’s not quite the same thing. There is a time lag when you turn the power on, but not during normal operations.
Modern nanometer designs are much worse than old ones because fast transistors leak, so you can adjust your process parameters for fast hot circuits or slow colder ones. When you are working with a new process you spend a lot of time dealing with split lots which let you see what the tradeoffs are for what the fab is really supplying you versus what they promised to supply you.
Cramming in transistors would be easier if you didn’t have to worry about heat and routing.

Yeah, he meant propagation delay. It comes mostly from the transistors themselves; the speed of light in the wires is only a small part of it. The transistors have to actually switch states for their outputs to change, and some physical process is doing this.
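As a rough illustration (all component values here are assumptions, just to show the orders of magnitude), the switching delay is an RC charging problem, and it dwarfs the signal’s flight time over the short wires between gates:

```python
# Gate delay as an RC charging problem: the driver's on-resistance
# charging the next stage's capacitance. All values are assumptions.
R_on = 10e3        # driver on-resistance, ohms
C_load = 1e-15     # next stage's input capacitance, ~1 fF

t_switch = 0.69 * R_on * C_load       # time to cross the 50% threshold
print(f"RC switching delay: {t_switch * 1e12:5.2f} ps")   # ~6.9 ps

# versus pure flight time over a 100-micron wire at ~0.5 c:
t_flight = 100e-6 / (0.5 * 3e8)
print(f"Wire flight time:   {t_flight * 1e12:5.2f} ps")   # ~0.67 ps
```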

The E-6B “Dalton Computer” was designed to be used with gloves, although how well that would work with a pressure suit is another issue; you’d basically have to make it comically oversized to overcome visual distortion of the helmet and to allow for clumsy manipulation through thick gloves.

There was an alternate ‘fast-track’ proposal using the Gemini capsule that would land a single astronaut on an open platform using purely visual tracking and automanual flight. Presumably it would have used some combination of horizon tracing, stellar alignment, and mass flow measurement to estimate altitude and remaining impulse to descend, and timing and dead reckoning to ascend and intercept back with the Gemini capsule, along with a healthy amount of luck. (This was separate from the proposal for a Gemini-based Direct Ascent profile which was also pitched by McDonnell Douglas.) Fortunately, Lunar Orbit Rendezvous turned out to be promising and the ‘Hail Mary’ approach was abandoned.

You certainly couldn’t put a mechanical computer of any useful capability on a rocket or satellite. Without electronic digital computers which could be easily programmed to perform arbitrary calculations, we wouldn’t have any modern technology. The only other viable alternative would be electrochemical computers, and that would require an ability to finely manipulate enzymes and peptides, which is not something we’re very adept at with our clunky monkey hands.

Stranger

For modern computers it is never an issue because even the largest server farm or supercomputing cluster doesn’t have enough power draw fluctuation to cause the external power grid or properly sized power generators to create significant load fluctuations. However, when they started building really large vacuum tube machines it was necessary to not only sequence powering up different sections to prevent large transient fluctuations but also to distribute the computing load lest they see large feedback loops in the power system. Eventually they realized that just keeping all tubes powered at a threshold level both prevented large fluctuations and made the vacuum tubes last longer. The amount of energy that went into keeping those massive vacuum tube machines of the early ‘Fifties operating was phenomenal, particularly given how ludicrously slow they were compared to even a smartphone today, but they were still a vast improvement over having rooms of manual ‘computers’ (human operators) crunch through sequential calculations on adding machines.

Stranger

Has no one mentioned that amplification will almost surely be necessary? A transistor is, fundamentally, an amplifier, as are triode vacuum tubes, electromechanical relays, and fluidic triodes in hydraulic computers. This leads to a need for a high-value energy source — e.g. electrical (or perhaps a wound spring in a mechanical system).

Amplification would be pretty easy to do with a hydraulic or pneumatic system; I can’t think of how you could do it with a purely mechanical system except by mechanical advantage of some kind. I think trying to perform any single line of calculation more complicated than, say, inverting a 3x3 matrix would become so cumbersome and potentially erroneous that it wouldn’t be worth the effort. Babbage difference engines and the later differential analyzers basically functioned by doing a series of simple calculations over and over.

Stranger

Sigh… haven’t you figured it out yet, guys? It’s all rocks in a desert! :slight_smile:

Mechanical amplification doesn’t sound too hard, to me. You have a big shaft with a force applied to it, and held in place by a small, low-friction pin (perhaps on rollers, or the like). A low-power movement to remove the pin, and you get a high-power movement of the big shaft.

As an aside, I saw once that someone built a mechanical computer capable of playing perfect Tic-Tac-Toe, entirely out of Tinkertoys.

In 2014, data centers drew 2% of all US power. Cite. But since you put in the right power management hardware before you start, you are right that it isn’t a big issue.
However I was talking about inside a processor, not the data center as a whole.
There are low power processors, and high power faster processors, and the latter draws a lot of power. We were limited in how many processors of the type I worked in we could put into the burn-in chamber by power supply issues. Also, you cannot do at-speed test at wafer probe because you don’t have access to enough bumps on the die to supply enough power to have the processor run at full speed.
The heat sinks on some of these things were really impressive.

That’s mostly true, but if you are sending signals across a large chip you use repeaters, which boost the signal part way there. When you do timing closure on a big chip the propagation delay for long signals is a big issue.
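The reason repeaters pay off: the delay of a distributed RC wire grows with the square of its length (the Elmore approximation, t ≈ 0.38·r·c·L²), so chopping it into buffered segments trades a few repeater delays for a big reduction in the quadratic term. A sketch with assumed per-unit values:

```python
# Distributed RC wire delay: t ~ 0.38 * r * c * L^2 (Elmore). Cutting
# the wire into n buffered segments makes the total nearly linear in L.
r = 1e5        # wire resistance per meter, ohm/m   (assumed)
c = 2e-10      # wire capacitance per meter, F/m    (assumed)
L = 0.01       # a 1 cm route across a large die
t_rep = 5e-12  # one repeater's delay, 5 ps         (assumed)

for n in (1, 2, 4, 8):
    wire = n * 0.38 * r * c * (L / n) ** 2    # quadratic term, divided by n
    total = wire + (n - 1) * t_rep
    print(f"{n} segment(s): {total * 1e12:6.1f} ps")
# 760 ps unbuffered shrinks to ~130 ps with 8 segments.
```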

Transistors in digital logic are more switches than amplifiers, since the voltage for the output comes from the power or ground input, not from the input signal, which just does the switching.
But I think this would be a big issue for a mechanical computer, since the strength of the signal (whatever that means) will degrade over logic stages. That is exactly one of the reasons that digital logic, using either transistors or vacuum tubes, is such a big winner.

I see no one else has mentioned Drexler’s rod logic.

This could, in principle, be made arbitrarily small–down to the size of atoms. The basic idea clearly works, and supports all the normal binary operations we expect. Less clear is if it could ever be built at a molecular level.

There is no inherent limit to the efficiency of mechanical computing beyond thermodynamic limits. A reversible mechanical computer can be arbitrarily efficient. If you need to erase results, you’ll need to expend some energy (though very little).
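That thermodynamic floor is Landauer’s limit: erasing one bit at temperature T costs at least kT·ln 2. A quick computation of how little that actually is:

```python
import math

k = 1.380649e-23   # Boltzmann constant, J/K
T = 300.0          # room temperature, K

e_bit = k * T * math.log(2)            # Landauer's limit per erased bit
print(f"minimum erase cost: {e_bit:.2e} J/bit")   # ~2.87e-21 J

# Erasing 1e12 bits/s at this floor dissipates only ~3 nW; fully
# reversible operations, which erase nothing, have no such floor.
print(f"at 1e12 erasures/s: {e_bit * 1e12:.1e} W")
```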

Even nanomechanical systems will likely be slower than current electronic or optical ones, but they can (in principle) be made so small that they’d still be advantageous for certain low-power applications. A cubic micrometer of molecular rod logic, even running at mere megahertz, represents an immense amount of computing power.
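A back-of-envelope on that claim (the 10 nm gate pitch is purely an assumption, and probably a conservative one):

```python
gate_pitch = 10e-9                       # assumed: one gate per (10 nm)^3
gates = (1e-6 / gate_pitch) ** 3         # gates per cubic micrometer
clock = 1e6                              # "mere megahertz"

print(f"gates in 1 um^3:  {gates:.0e}")          # ~1e6
print(f"gate-ops per sec: {gates * clock:.0e}")  # ~1e12
# A trillion gate operations per second from a volume about the size
# of a bacterium, before stacking many such cubes in parallel.
```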

All of this however depends on molecular assembly, which has yet to be demonstrated beyond a handful of atoms.