A somewhat fanciful question about analog computing

The root of my question is a video I watched today comparing some things between analog and electronic computers.

My question is this: how large would an analog computer have to be to be comparable to a modern laptop?

Now, I don’t think it would be completely comparable. I don’t think speed of operation would, or even could, be anything like a laptop.

But what about complexity of computation?

So, some considerations and constraints: I’m obviously not looking for speed; I don’t think that’s possible by today’s standards.

By analog computing, I mean mechanical linkages or fluids and valves or any combination of those, or whatever else you can think of, as long as it’s something not electronic or quantum. (I think I should point out that electrical components are acceptable for motors or cooling pumps and the like.)
Miniaturization is acceptable as long as you don’t need anything stronger than a jeweler’s loupe, and maybe watchmaker’s-type or -sized tools, for maintenance and repairs.

I also understand that the rate of operations is going to be (compared to today’s standards) agonizingly slow. This is of course a necessity due to the nature of the machine, both because of the brute latency limitations inherent in the components and because we don’t want it to melt into slag right off the bat by running it too fast and overheating it. (Yes, I recognize heat, at least, as a huge concern.)

So, large building, city size? Continent size? Planet sized? 42?

See Turing Machine.
Any computer program that can be processed by a laptop, or the fastest supercomputer, can be processed by a Turing machine.

It could be tiny.

It depends on what you consider an analog computer to be. I don’t think there’s a design concept for highly complex analog computers like that; they would have to be components of a larger digital model, with the analog components performing arithmetic operations.

You’re conflating two independent concepts: analog computing, and non-electronic computing.

There are analog electronic computers, and in fact camera sensors and flash memory could be considered a form of analog computing. And fluid/mechanical/etc. computers can be digital.

Analog computers are highly specialized machines; no one ever developed a general-purpose analog machine. They were good at things like solving differential equations or computing integrals. As a simple example, to compute an integral you could connect the input to a linear valve, and measure the amount of water that has flowed into a tank over time. You can do the same thing with a vacuum tube or transistor and a capacitor.
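As a rough numerical sketch of the water-tank integrator described above (the tank’s fill rate tracks the input signal, so the accumulated volume approximates the running integral), here is a toy simulation; the signal and step size are arbitrary choices, not from any real machine:

```python
# Simulate the water-tank integrator: flow rate into the tank equals the
# input signal, so accumulated volume approximates the integral over time.
import math

def tank_integrator(signal, dt):
    """Accumulate signal * dt, like water flowing into a tank."""
    volume = 0.0
    volumes = []
    for flow in signal:
        volume += flow * dt
        volumes.append(volume)
    return volumes

dt = 0.001
ts = [i * dt for i in range(10000)]     # 0 .. 10 seconds
inflow = [math.cos(t) for t in ts]      # input signal: cos(t)
result = tank_integrator(inflow, dt)

# The true integral of cos(t) from 0 to 10 is sin(10); the "tank" should
# land close to it, off only by the discretization error.
print(abs(result[-1] - math.sin(10.0)))  # small discretization error
```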

You seem to be thinking then of non-electronic digital computers. Peter Morris points out that the Turing machine demonstrates that a computer of any size can simulate any other computer. This is true, although Turing machines are “programmed” so differently from conventional computers that they are hard to compare.

A recognizably modern microprocessor (say, an Intel 8080 or MOS 6502) can be implemented in a few thousand transistors. Relay computers are electromechanical, which I think fits your requirements, and would require about a cubic inch per relay (one relay per transistor). A 6-foot-square panel would be approximately enough to build one, though it would require clever packing.

It also wouldn’t have any memory, which is a significant difficulty. A full 64 KB of memory requires about half a million switches, one per bit. Perhaps these could be packed into a cube instead; still several feet on a side.

The limited memory means that problems your laptop could solve in one pass would have to be split up. You don’t seem to care much about speed, so perhaps multiple passes are OK.

If not, then you will have to add a great deal of extra memory. If we are still going with a cubic inch per bit, then 8 GB of memory will require a cube 4000 inches on a side, or 333 feet. Somewhat more actually, since the addressing and data wires start to take up significant space.

Persistent storage is also a problem. Punchcards seem an appropriate medium here. 1 TB requires about 8.3 billion cards, which at 2.5 g each would weigh 20,000 tons, and take up perhaps 30,000 cubic meters.

So, about the size of a large building. Not city sized or worse. But horribly slow, of course.
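For what it’s worth, the arithmetic above can be checked in a few lines, keeping the post’s assumptions: one cubic inch per bit, and a standard 80-column punch card holding 120 bytes at 2.5 g per card (the card capacity is my assumption, implied by the 8.3-billion-card figure):

```python
# Back-of-envelope check of the sizes quoted above.
bits_8gb = 8 * 2**30 * 8            # 8 GB expressed in bits
side_in = bits_8gb ** (1 / 3)       # cube side in inches, at 1 cubic inch/bit
print(round(side_in))               # 4096 inches
print(round(side_in / 12))          # 341 feet (the post rounds to ~333)

cards = 1e12 / 120                  # 1 TB at 120 bytes per 80-column card
print(round(cards / 1e9, 1))        # 8.3 billion cards
tonnes = cards * 2.5 / 1e6          # at 2.5 g per card
print(round(tonnes))                # ~20,833 tonnes
```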

This. Analog/digital is an entirely different way of categorizing than electronic/non-electronic.

But I disagree about general-purpose analog machines. They did indeed exist. The machines comprised a variety of building blocks called “operational amplifiers” or “op amps,” with which they carried out mathematical operations including addition, subtraction, multiplication, division, logarithms, exponents, square roots, differentiation, integration, and more. Typically the blocks would be strung together with patch cords, like an old telephone switchboard. Op amps are still very much in use today, though generally not to achieve a computational result. They remain building blocks of many analog circuits.
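As a toy illustration of those patch-cord setups: the classic demonstration is two integrator blocks wired in a loop to solve x'' = -x, the harmonic oscillator. Here is a minimal numerical stand-in for that hardware, with each running sum playing the role of an op-amp integrator:

```python
# Two "integrator blocks" in a feedback loop solve x'' = -x.
# Starting from x(0) = 1, x'(0) = 0, the solution is cos(t).
import math

dt = 1e-4
x, v = 1.0, 0.0
for _ in range(int(math.pi / dt)):   # run until t = pi
    a = -x          # summing block: acceleration x'' = -x
    v += a * dt     # first integrator: v = integral of a
    x += v * dt     # second integrator: x = integral of v
print(round(x, 2))  # -1.0, i.e. cos(pi)
```

The update order (velocity first, then position) is the semi-implicit Euler scheme, which keeps the simulated oscillation stable, much as a well-trimmed analog loop would be.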

I still have a few component level analog computers that look like plastic DIP integrated circuits but about twice as big in every dimension. They give an output equal to XY/Z, so three input pins and one output pin, in addition to the usual power supply pins. I used them in a small product development project about 30 years ago. I’d argue that XY/Z is intended as a somewhat general purpose, if simple, analog computer.

There’s a splendid discussion at Analog computer - Wikipedia including many photos of general purpose and special purpose analog computers, including electronic and otherwise.

I talked to someone within the past year or so who was pushing for new research in this area. Something about lower-energy computing and analogue artificial neural networks. Our brains are efficient, etc.

I’m not equipped to evaluate anything about this and don’t have a lot of details that I remember anyway.

The key question isn’t how big it would have to be, but how small. Mechanical computers get faster, more efficient, and more reliable as the size of the components decreases. And in fact, if you extrapolate down to nano-scale components (which are actually made using similar techniques to how one makes modern electronic computers), their performance would be comparable to that of electronic computers. Though of course that’s based on extrapolation, which assumes that nothing important changes at vastly different scales, which is probably false.

*The Three-Body Problem* has a digital computer made up of people. I was trying to teach a second-grade class a little bit about gates, and I had the class simulate a small adder. Both are examples of non-electronic digital circuits.
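That classroom adder can be sketched with a single gate type. Here is a toy version (my own construction, not the classroom one): the standard nine-gate full adder built from NAND alone, with each call standing in for one student playing a gate:

```python
# A full adder built entirely from NAND gates.
def nand(a, b):
    return 0 if (a and b) else 1

def full_adder(a, b, cin):
    """Standard 9-NAND full adder: returns (sum, carry_out)."""
    t1 = nand(a, b)
    t2 = nand(a, t1)
    t3 = nand(b, t1)
    axb = nand(t2, t3)      # a XOR b, from four NANDs
    t4 = nand(axb, cin)
    t5 = nand(axb, t4)
    t6 = nand(cin, t4)
    s = nand(t5, t6)        # sum = a XOR b XOR cin
    cout = nand(t1, t4)     # carry = (a AND b) OR ((a XOR b) AND cin)
    return s, cout

# Exhaustive check against ordinary arithmetic:
for a in (0, 1):
    for b in (0, 1):
        for c in (0, 1):
            s, cout = full_adder(a, b, c)
            assert 2 * cout + s == a + b + c
print("full adder OK")
```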

It’s just a matter of terminology. When I said “general purpose”, I was including concepts like stored programs. That’s probably a little unfair: there were general-purpose digital computers that didn’t have stored programs (like the first-generation ENIAC). But they only existed for a very short time, since it was recognized almost immediately that stored programs are a significant advancement and greatly enhance the practical generality.

A collection of analog blocks strung together with patch cords doesn’t feel like a general-purpose analog computer to me, just a streamlined way of building a specialized computer. An analog computer that could reconfigure itself on the fly, or even reconfigure itself based on its own operation, would be a totally different beast.

I don’t know specifically what you’re referring to, but the idea isn’t nonsense. In the early days, neural nets were using the same high-precision arithmetic as computers normally use: 32-bit floating point numbers. CPUs and GPUs are good at this stuff, so it was a logical choice.

But it turns out to be overkill most of the time. Neural nets are resilient to error, and can tolerate lower precision. So on the evaluation side (called “inference”), hardware has been supporting lower precision over time: 16 bit, 8 bit, and the most recent I’ve heard of is 4 bit. Lower precision means fewer transistors and lower power.
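A toy sketch of the precision point (sizes and values here are arbitrary, not any real model): quantize a weight vector to signed 4-bit integers, then compare a dot product against full precision. The error stays small relative to the result, which is the whole argument for cheap low-precision inference hardware:

```python
# Quantize weights to signed 4-bit (-8..7, plus one shared scale factor)
# and compare a dot product against full precision.
import random

random.seed(0)
weights = [random.uniform(-1, 1) for _ in range(256)]
inputs = [random.uniform(-1, 1) for _ in range(256)]

scale = max(abs(w) for w in weights) / 7
qweights = [max(-8, min(7, round(w / scale))) for w in weights]

full = sum(w * x for w, x in zip(weights, inputs))
quant = sum(q * scale * x for q, x in zip(qweights, inputs))
print(abs(full - quant))   # small compared to the magnitudes involved
```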

But one could go analog, too. The output can be noisy; as noted, NNs are tolerant to error. So an analog device that gets in the same ballpark as 4 bit ops might be completely reasonable.

The trouble as I see it is that semiconductor devices are at their most efficient when totally on or totally off. There is negligible current when off, and negligible voltage drop when on–both imply negligible power. An analog device in an intermediate state will not be as good.

However, digital circuits at a small enough scale, and switching at a high enough speed, start to look analog. So maybe there is a scale where analog makes sense again.

I worked with these. They don’t look analog in the analog circuits sense, but the concept of modeling signals as nice clean square waves goes right out the door. I proposed in a column that we stop teaching kids that gates have nice binary signals going in, and start to show the real waveforms from the start.
When I was an undergrad in TTL days we did have nice clean waveforms.

BTW, as an aside, people are implementing traditional analog stuff like radio circuitry as digital, since you can then integrate it with the rest of a system, and so you can get the benefits of integration.

Reconfiguration seems like a digital process, though, so at best you might get a general-purpose hybrid analog/digital system.

If you want to go primarily analog for most operations (there will probably have to be some digital aspects to a practical, flexible device), instead of fluids, rods and/or gears, the most obvious non-electronic approach has to be the lightest touch. I suspect that interesting things could be done with non-linear[sup]*[/sup] optical media, to the extent, perhaps, that transient computational structures could be established in semi-tabula rasa media, meaning that many basic processes would be blazingly fast due to reduced CPU overhead.

There would still be an electronic element to such a device, though, because you would probably want to use semiconductor lasers rather than gas type (“cleaved coupled cavity” lasers would be most effective for reliable high-speed computing).
*Non-linear media have an index of refraction that varies with the intensity of the light passing through them.

I can only contribute an anecdote (other than to stress that the analog/digital distinction in computing is something very different from electronic/mechanical):

When I was at college, we had a prof for control engineering who was kind of a luminary in his field. This was around 1990, and one of our laboratory courses was operating a fucking analog computer. This was really weird to us students at the time, but interesting, and even then we took it as a historical lesson.

The test for that course was one of the most difficult in my studies, because of the hard basic mathematical groundwork. Oh my, all those multi-layered fractions we had to reduce :(…

Back in 1985 or so when I was in college, there was an analog computer in the back of one of the engineering labs. I don’t think we ever did anything with it, but one of the professors talked about it enough that I knew what it was.

Did it also have a big representational switch board, like a big white board, with inputs and outputs next to symbols on the board representing integrators and such? That’s how I remember the interface, but I have no idea how the hardware behind it really looked.

Dr. Strangelove is right, but I’d like to add my two cents:

An analog computer is a computer that is programmed by setting values to a set of analog elements (electronically: transistors, resistors, inductors, capacitors, voltage and current sources) whose arrangement provides an analog to a mathematical expression. Activating the assembly solves the equation by evolving to an end state. The values of the variable parameters at the end (such as voltage or current) are the solution to the math.

Analog circuits are just circuits whose “output” values vary over a smooth continuum as the “input” values change, rather than being restricted to a limited number of states (usually two, though multi-level digital logic is a thing) as in a digital circuit. Fundamentally, though, all digital gates in a computer are analog circuits (well, even more fundamentally, everything’s a quantum system); the intermediate values are just made probabilistically negligible, so valid inputs almost invariably produce a clean “high” or “low” output. But all the elements are analog, and analog effects can occur, usually screwing up the result.
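One way to picture “digital gates are analog underneath” is an inverter’s transfer curve. The curve below is a hypothetical smooth model (a logistic function, not a real device characteristic): clean inputs land near the supply rails, but there is a whole continuum of outputs in between that digital design works hard to avoid:

```python
# Hypothetical smooth inverter transfer function (logistic curve, not a
# real device model): near-rail outputs for clean inputs, a continuum
# of "analog" values in between.
import math

VDD = 1.0

def inverter(vin, gain=20.0):
    return VDD / (1.0 + math.exp(gain * (vin - VDD / 2)))

print(round(inverter(0.0), 3))   # 1.0: solid logic high
print(round(inverter(1.0), 3))   # 0.0: solid logic low
print(round(inverter(0.5), 3))   # 0.5: the analog middle ground
```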

Electromechanical relays can be miniaturized to (theoretically) build an all-relay computer (look up carbon nanotube crossbar switches for the micro version of an electromechanical relay), including memory. But why would anyone waste time building a multi-billion-dollar manufacturing plant for this technology when there is already a perfectly fine CMOS plant? (Though for memory, they may have some utility in closing the processor-memory gap.)

Finally, as a note, it isn’t the case that analog computers are a relic of the past. The D-Wave quantum computer is, in fact, an analog computer.

That’s kinda’ unfair, given that the individual transistor elements in modern chips are tens of nanometers. There was some talk of using the same micro-fabrication techniques used to make computer chips to make micron-scale mechanical parts for special purpose (i.e. ultra-radiation resistant) computers.

My thread on building a Babbage Analytical Engine using 20th century watchworks components. http://boards.straightdope.com/sdmb/showthread.php?t=879193

Well, maybe! But one can imagine that at least in part, the configuration itself is also analog. Take a simple if/else selection: digitally, it might be “if X, pick A, else pick B”. But an analog computer might support all intermediate values, blending A and B as necessary.
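That blended if/else is trivial to sketch: a continuous selector t in [0, 1] mixes the two branches, and the digital cases fall out at the endpoints (names here are made up for illustration):

```python
# "Analog if/else": a continuous selector blends the two branches.
# t = 1.0 and t = 0.0 recover the ordinary digital if/else cases.
def analog_select(t, a, b):
    return t * a + (1 - t) * b

print(analog_select(1.0, 10, 20))   # 10.0: pure "if" branch
print(analog_select(0.0, 10, 20))   # 20.0: pure "else" branch
print(analog_select(0.5, 10, 20))   # 15.0: an analog blend of both
```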

For a more advanced example, suppose one had a “calculus” unit–if you pass it 1.0 it computes the derivative, if you pass it -1.0 it computes the integral, and 0.0 is just a passthrough–but intermediate values like 0.5 would compute the half-derivative.

It would be hard to do everything this way. What would an analog “instruction” look like? The registers might be analog, but ultimately there is going to be a discrete selection of which register you pick. So you’re right that there would be some digital elements, but it might be possible to carry the analogness surprisingly far throughout the system.