Mechanical Computers, limits & possibilities

I wonder if there could be any path to molecular assembly that did not first rely on development and use of electronic digital computing.

That is, I don’t think you could bootstrap your way to efficient mechanical computing unless you had ***lots*** of time (to do your initial work on cruder mechanical devices). Some things which require speed (like controlling and measuring chemical reactions) might not be doable at all.

I’m thinking of environments inimical to electronic computing. (Take your pick of sci-fi scenarios: squid beings living in the ocean; hostile aliens capable of detecting electronic computers in action; etc.)

I’m also guessing that if you could build this rod computer at nanoscale, it might still need an electronic interface to send inputs, drive it, and return outputs.

The signal strength doesn’t necessarily have to degrade in a mechanical computer - the machine may just require mechanical power input at each stage, switched by the smaller mechanical logic outputs of the previous stage (just as electronic logic stages require electrical power input, switched by the smaller electrical logic inputs). Transistors, whilst they are amplifiers, are also inherent consumers of input power; they create losses in the system just as mechanical friction creates losses.
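A toy sketch of that point (all names and numbers here are made up for illustration, not a model of any real machine): each stage thresholds a possibly-degraded input and emits a fresh, full-level output paid for by its own power source, so the logic level survives a long chain even though power is consumed throughout.

```python
# Toy model of level-restoring logic stages (hypothetical illustration).
# Each stage reads a possibly-degraded input, thresholds it, and emits a
# full-strength output drawn from its own power source -- so the signal
# level does not decay across a long chain of stages.

def restoring_stage(level, threshold=0.5, rail=1.0, loss=0.2):
    """Threshold the incoming level, then output a fresh rail-level signal.
    The fixed per-stage loss is paid by the power source, not the signal."""
    bit = level > threshold            # the 'switching' decision
    return (rail if bit else 0.0), loss

signal = 0.9
total_loss = 0.0
for _ in range(1000):                  # a thousand stages deep
    signal, lost = restoring_stage(signal * 0.7)  # 30% degradation per link
    total_loss += lost

print(signal)      # still at full rail level: regenerated at each stage
print(total_loss)  # but energy has been dissipated all along the chain
```

The analogy to transistor logic is that the "rail" plays the role of the supply voltage: the weak input merely switches, it never has to carry the output.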

**Photonic Computers**

This may be our best hope for attaining a quantum leap in computer speed and performance, but there are hurdles to clear before they become a practical reality.

This is precisely the point I intended to make. Perhaps insertion of a timely ‘to function effectively as switches’ would have eliminated any ambiguity in my phrasing. And of course amplification need not be (indeed should not be) linear.

BTW, I don’t think mechanical advantage (e.g. lever) is sufficient to provide necessary “amplification.” (It’s ironic I need to remind that, since it’s you guys who staunchly defend 2nd LOT against me when I make wild speculations in a physics thread! :stuck_out_tongue: )

Sure. Living organisms do it all the time at the intracellular level, using ribosomes and proteases as assemblers and disassemblers for proteins, and in ways that are so complex even our best digital computers can only simulate the simplest operations. However, doing so “consciously”, e.g. building a purpose-specific protein to perform a specific type of action or calculation, is vastly beyond our capability to perform deliberately. (To be fair, we’ve constructed wholly synthetic proteins in a lab, but largely to examine the methods and novel emergent properties; we can’t really predict what a protein will actually do de novo, and doing so will probably require fundamental breakthroughs in molecular biology.) Of course, our brains do some form of computation that is fundamentally based upon molecular action; while they are not computers in any conventional sense of performing discrete calculations following an explicit (software) or implicit (hardwired) algorithm using finite state logic, they can perform computation-like feats with repeatability and great accuracy when conditioned to do so.

One interesting recent discovery is that cephalopods can ‘edit’ their RNA expression, literally changing how proteins are built without underlying changes to the genome. Despite some pop-sci speculation about how this may change evolutionary adaptation, we don’t really understand how this is caused to happen, or to what extent it may be controlled by some aspect of the sensory response system (especially given that cephalopods have a much more distributed ‘central’ nervous system than mammals of similar complexity). But it presents the potential that life could develop the capability to be self-altering, modifying itself or its environment without going down the route of stone tools and similar terrestrial hardware.

Stranger

Signals wouldn’t “degrade” in a mechanical computer any more than in an electronic one. A 1 or 0 is signaled by gears or levers or whatever engaging or not engaging to allow a driven wheel/axle to turn or not turn - much as a signal in an electronic computer simply allows a transistor to pass or not pass current.

The limiting speed would come down to such details as how fast the mechanical levers or cogs can engage or disengage, and whether at too fast a speed the gears will grind, shear, etc. As with electronic devices, smaller, lighter pieces have less momentum, can move faster, and require less energy to complete their cycle (but would be more fragile).

People in my center at Bell Labs were working on this (and photonic switching) 30 years ago. I don’t see it as one of the possibilities cited for the next stage, so I guess they haven’t panned out very well.

I’m surprised there has been no discussion of actual mechanical computers. Especially the Zuse Z1:

https://en.wikipedia.org/wiki/Konrad_Zuse

and the Harvard Mark I:

https://en.wikipedia.org/wiki/Harvard_Mark_I

though I appreciate it could be argued that the latter was electromechanical rather than mechanical.

It’s so well-known as to be trite, but if we’re mentioning mechanical computers that have actually been built let’s not overlook the Antikythera mechanism. Some speculate this was designed by the great Archimedes himself. :eek:

What about pneumatics? Or hydraulics? Basically mechanical computers but there would be long hoses that let you interconnect far off sections.

Would still be a pretty slow computer.

One thing that hasn’t been mentioned is that mechanical computers used circuits for which there is no direct digital equivalent. Like, you can have yourself an analog integrator or differentiator. Now, analog electronic computers have all this too, and they probably worked better; I’m just saying that you can make up for *some* of the terrible slowness and weakness of a mechanical (or electrical analog) computer by getting more done per operation.
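For what it’s worth, here’s a rough numerical sketch of why an integrator gets “more done per operation”: in a wheel-and-disc integrator, the wheel’s distance from the disc centre is set to y and the disc turns with x, so the wheel’s accumulated rotation is proportional to ∫y dx in one continuous motion. (This is just a midpoint-rule simulation of that idea, not a model of any real machine.)

```python
import math

# Hypothetical sketch of a wheel-and-disc integrator: the wheel sits at
# radius y on a rotating disc, so as the disc advances by dx the wheel
# turns in proportion to y * dx, accumulating the integral of y dx.

def wheel_and_disc(y_of_x, x_start, x_end, steps=100_000):
    dx = (x_end - x_start) / steps
    total = 0.0
    for i in range(steps):
        x = x_start + (i + 0.5) * dx   # disc advances by dx each step
        total += y_of_x(x) * dx        # wheel turns in proportion to y * dx
    return total

# Integrate cos(x) from 0 to pi/2; the exact answer is sin(pi/2) = 1.
print(wheel_and_disc(math.cos, 0.0, math.pi / 2))  # close to 1.0
```

The digital machine has to grind through those hundred thousand multiply-adds; the mechanical one does the whole integral as a single continuous operation, at the cost of analog precision.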

You know, if we lived in a world where we never could invent electronic computers, could we resort to using live neurons cultured from humans or animals as our computers?

I mean, like everything else, working out how to do this would be much easier if you had powerful electronic computers. And the “neuron chips” would need to be made using processes similar to what we use to make silicon chips…

Yes and no. Pneumatic/hydraulic computers will suffer from the same problem as electronic computers - a “square” wave is, in theory, a compound of multiple sine waves, and over distance it becomes less “square” as the higher frequencies attenuate, IIRC. (This problem messed up the first transatlantic cables.) Thus a simple on-off degenerates into something closer to a sine wave, where the exact timing of 1 and 0 becomes more difficult to ascertain. (Plus of course, distance is a more crucial factor when trying to speed things up - there’s a “speed of sound” limitation to pneumatic actuation, for example…)
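A toy illustration of the square-wave point (the attenuation function below is invented for demonstration, not a real cable’s frequency response): a square wave is a sum of odd sine harmonics, and knocking down the high harmonics visibly lowers and rounds the flat top.

```python
import math

# A square wave is (in Fourier terms) a sum of odd harmonics:
#   sq(t) ~ (4/pi) * sum over odd n of sin(n*t) / n
# If a transmission line attenuates higher frequencies more, the
# reconstructed wave loses its sharp edges.  The exponential attenuation
# model here is made up purely for illustration.

def square_approx(t, harmonics=50, atten=lambda n: 1.0):
    return (4 / math.pi) * sum(
        atten(n) * math.sin(n * t) / n for n in range(1, 2 * harmonics, 2)
    )

clean = square_approx(math.pi / 2)                               # mid-pulse, no attenuation
smeared = square_approx(math.pi / 2, atten=lambda n: math.exp(-0.1 * n))

print(round(clean, 3))    # close to 1.0: the flat top of the square wave
print(round(smeared, 3))  # lower: high harmonics gone, edges rounded off
```

The receiver’s job of deciding “1 or 0, and exactly when” gets harder as the waveform sags toward a sine.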

The problem with electromechanical computers is how fast the relays can flip. Hence the quantum leap in computing power when relay switches moving back and forth were replaced by electronic valves (tubes), and then transistors, which could flip back and forth (on and off) several orders of magnitude faster, even in the early days.

The clock speed of the computer cannot exceed the flip time of the gate elements. The next clock cycle needs to wait until the settings of the previous clock cycle have definitely settled to a steady state.
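Back-of-envelope version of that constraint (the flip times and chain depth below are illustrative guesses, not measured figures): the clock period has to cover the worst-case chain of gate settle times, so swapping milliseconds for microseconds per flip buys orders of magnitude of clock speed.

```python
# Illustrative numbers only: a relay might settle in milliseconds, a
# vacuum-tube stage in roughly microseconds, and a signal might have to
# ripple through some chain of gates within one clock cycle.

relay_flip_s = 5e-3        # assumed relay settle time
tube_flip_s = 1e-6         # assumed tube-stage settle time
gates_per_cycle = 10       # assumed gate chain depth per cycle

def max_clock_hz(flip_time_s, chain_depth):
    """Clock period must exceed the worst-case settle time of the chain."""
    return 1.0 / (flip_time_s * chain_depth)

print(max_clock_hz(relay_flip_s, gates_per_cycle))  # roughly 20 Hz
print(max_clock_hz(tube_flip_s, gates_per_cycle))   # roughly 100 kHz
```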

The trouble with mechanical integrators or differentiators is that, being (typically) analog devices, they are limited to very low-resolution answers; slide rules were good for two to three digits. My dad was able to use Jedi math mind tricks to calculate an extra digit, turning a 2.5-digit-accuracy slide rule result into 3.5 digits of accuracy; I never had to, because electronic (digital) calculators came along and could do the same calculations to 8 digits faster than I could punch in the inputs.
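To put a number on the precision gap (the `sig_round` helper here is my own throwaway, not a standard function): rounding a result to three significant figures is roughly what reading a slide-rule scale gives you, versus the calculator’s full-precision answer.

```python
import math

# Rounding a result to slide-rule precision (about 3 significant figures)
# versus what a digital calculator reports.  sig_round is a hypothetical
# helper written for this illustration.

def sig_round(x, digits=3):
    """Round x to the given number of significant figures."""
    if x == 0:
        return 0.0
    exp = math.floor(math.log10(abs(x)))
    return round(x, digits - 1 - exp)

exact = 2.54 * 3.17           # calculator: full machine precision
slide = sig_round(2.54 * 3.17)  # slide rule: ~3 significant figures

print(exact)   # close to 8.0518
print(slide)   # 8.05 -- the digits a slide-rule scale would actually show
```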

(Electro)mechanical analog integrators, differentiators, etc. may have had a good application in process control, where, before digital process control, speed was more important than 10 digits of accuracy.

(I remember a software add-on for old computers, claiming to turn your 386 into a 486. What it did was replace the floating-point calculation library on the 386 with one that computed only 4 digits of accuracy instead of 8. While the algorithms were not quite n^2, calculating fewer digits probably took about 1/4 the time; and since video did not exceed 1024 pixels, that made a huge difference when calculating screen displays for videogames in real time… While on a 486, the hardware floating-point unit in the DX chip made the calculations even faster…)

As Stranger said, life is an existence proof.

Beyond that–it’s hard to speculate, since we don’t yet have robust molecular assembly even with electronic circuits. And there’s no existence proof of “rigid” nanotech–Drexler et al. usually suggest covalent carbon bonds as the main structure; basically diamond. But we don’t yet have examples of tiny, atomically perfect diamond machinery (even in nature).

Is there a more progressive path to fast mechanical computing? That’s basically the plot of a few steampunk novels; somehow Babbage’s engines get faster and smaller until they are near parity with electronic ones. I suppose it’s possible; a mechanical computer with the density and precision of a pocketwatch could probably match the earliest electronic computers in computing density. And that might be enough to drive further advances into the micro realm.

As for sentient cephalopods, another difficult point is that this is all so dependent on economics. We’ll probably hit the economic limits of silicon before the physical limits–at some point, silicon fabs are just going to be too expensive to justify the next incremental improvement. Already, progress is very slow. Speculating on how some other civilization would pay for the immense investment needed for fast mechanical computers is… well, science fiction.

Yes, I’ve seen the documentary

I’d actually argue that that’s more of a feature than a bug. Any calculation will only ever be as good as your inputs, and it’s quite rare for any of your inputs to actually be any better than three digits (because, of course, you’re usually reading it off of a device that has exactly the same sort of limitations as a slide rule). In that case, using an 8 or 12 digit electronic calculator doesn’t actually give you any increase in real precision; it only gives you the illusion of precision.

Both are covered by fluidics, but that discipline never really got off of the ground except for a few, odd applications (hot tub jets?).

There was a Heinlein character, “Slipstick” Libby, who did navigation with a slide rule but inside the rocket so no gloves were needed.

Of course, Heinlein’s rockets were powered by atomic piles and we know how well that worked out

This is the sort of thing I find intriguing; Babbage’s analytical engine scaled down with lightweight gears etc. (brass, of course). I think we might struggle to reach a clock speed of 20 Hz, but on the other hand it could be a decimal machine and so get more done per clock. However, it’s still going to be glacially slow, and in steampunk-world would be reserved for running important calculations rather than word processing.

Has anyone ever attempted this for real?

Slipstick Libby actually did all his calculations in his head; no slide rule used at all. You may remember the scene where the mechanical calculator broke down and he completed the asteroid’s maneuver by doing the calculations that way. I can’t remember if it said so explicitly, but I expect that before the breakdown, he was crosschecking the calculator with his own calculations.

Of course, in reality, most of the people who display that sort of incredible calculator-like capability also suffer from serious affective problems (autistic spectrum disorders) and are generally not very functional from a social standpoint. Brains can do computer-like things, but brains are not computers in the normal sense of the term.

Stranger

As mentioned, Slipstick Libby didn’t use one. I was thinking of the cover of the May 1951 Astounding, with a big slide rule illustrating a story called Galactic Gadgeteers by Harry Stine. IIRC, the problem to be solved in this story was to create a perfect square wave, which is pretty funny for those of us who have looked at high-speed signals.

Hell, creating any kind of wave that reliably goes up and down is pretty funny to anyone familiar with high speed signals. PCIe Gen3, for instance.