Field-programmable analog arrays and of course conventional analog integrated circuits are certainly available to researchers. An interesting question is whether anyone can cite instances where such hardware was used and proved superior to, say, an ADC+FPGA combination.
There is little basis for a blanket, unqualified statement that analog computers are superior for solving, analyzing and manipulating differential equations. If they were superior they would be in widespread use. They are not, and there is no expectation they will be anytime soon.
The cited abstract is one research effort by a few graduate students. It is limited to 8-bit resolution and a few hundred grid points, even using custom analog silicon. The complete paper is here (PDF): http://isca2016.eecs.umich.edu/wp-content/uploads/2016/07/8B-3.pdf
The abstract describes analog processing as an answer to the “dark silicon” problem, which is the inability of digital processing to continue upward performance scaling due to heat/power issues. This problem is caused by the end of Dennard scaling and Moore’s Law, plus constraints of Amdahl’s Law, and has been understood for a long time:
A pivotal paper discussing this is Dark Silicon and the End of Multicore Scaling (Esmaeilzadeh, et al, 2011): ftp://ftp.cs.utexas.edu/pub/dburger/papers/ISCA11.pdf
So this has been understood for a long time but almost nobody (much less a large group) believes analog methods are the general solution. Even for the narrow case of solving differential equations, it remains unproven whether analog methods can produce a practical commercial device that is clearly superior to current digital methods.
Even given the above constraints, digital methods still have various possible paths forward. These include increased use of application-specific digital logic, on-chip FPGAs, wider vector instructions, more sophisticated GPUs, different instruction architectures such as VLIW (e.g. the Mill: https://millcomputing.com/), carbon nanotube fabrication, optical computing, and even quantum computing.
Intel has already shipped CPUs with integrated FPGA accelerators. These essentially allow end users to program algorithms into application-specific hardware: Intel Begins Shipping Xeon Chips With FPGA Accelerators
IBM’s upcoming 200 petaflop supercomputer will use a combination of Power9 CPUs and nVidia Volta GPUs. It will not achieve this using analog processing: New 200-petaflop supercomputer to succeed Titan at ORNL - Oak Ridge Today
I’m not saying analog accelerators will never be used commercially, and Optalysys looks interesting. However it is (at best) a narrow, special-purpose optical analog device. As currently envisioned it could never replace traditional digital computers. In theory it can solve certain differential equations very rapidly with low power consumption. A possible application is CFD: Computational fluid dynamics - Wikipedia. Optalysys at least is a real company (not just a few graduate students) producing real hardware, and they have real funding. But it is essentially investigational technology.
http://optalysys.com/
Neuromorphic computing is (by some definitions) analog, by other definitions hybrid. In the far future it conceivably might be developed to handle useful problems. However it’s not available today and nobody knows if it will be practical or simply “peter out” like so many other concepts: Neuromorphic engineering - Wikipedia
I was not making a claim that it was better for general computing needs, or even applicable to most general computing needs, or that it would replace traditional digital computers.
I will say, however, that there are advantages that make analog computers well suited to solving, analyzing and manipulating differential equations. One example is when you need to work with continuous time or continuous values.
As the SD lacks inline images or TeX formatting I am just going to provide a cite.
http://isca2016.eecs.umich.edu/wp-content/uploads/2016/07/8B-3.pdf
Page 59 will provide an example and the rest of the deck covers advantages and challenges for both.
I am not saying one is better than the other in all cases, but that it is simply a matter of horses for courses.
If I can find a non-linear example that is not behind a paywall I will try to share it, but that is where Analog really starts to have an advantage.
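To make the continuous-time point a bit more concrete, here is a toy sketch of my own (not from the slide deck): a digital solver only ever sees a signal at its sample instants, so a 9 Hz component sampled at 10 Hz is indistinguishable from a 1 Hz one, whereas an analog integrator has no sampling grid at all.

```python
import numpy as np

fs = 10.0                 # assumed sample rate in Hz
t = np.arange(20) / fs    # 20 sample instants

x_9hz = np.cos(2 * np.pi * 9.0 * t)   # 9 Hz tone sampled at 10 Hz
x_1hz = np.cos(2 * np.pi * 1.0 * t)   # 1 Hz tone at the same instants

# The two sample sequences are identical: the 9 Hz tone has aliased to 1 Hz.
print(np.allclose(x_9hz, x_1hz))      # True
```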
I will partially agree in that not being limited in time domain resolution might very well make it faster. But you’re trading off one problem for another - you’re going to have shit numerical accuracy. The reason the paywalled paper was limited to 8-bit is that the circuits interfere with each other. The more complex the analog equation solver you build, the worse the interference will get. There are numerous sources for it, and it is not possible to increase your numerical accuracy to remotely close to what a digital system will give you. A 24-bit mantissa (what bog-standard IEEE 32-bit float gives you) is about 65,000 times (2^16) as accurate. It’s going to be really tough to make the analog circuitry that many times more accurate. Probably impossible.
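To put a rough number on that gap, here is a quick sketch of my own (assuming a unit-amplitude signal and ideal 8-bit quantization, i.e. ignoring analog noise and coupling entirely, which only flatters the analog side):

```python
import numpy as np

signal = np.sin(np.linspace(0.0, 2.0 * np.pi, 1000))   # unit-amplitude test signal

# Ideal 8-bit quantization over [-1, 1]
q8 = np.round(signal * 127.0) / 127.0
err8 = np.max(np.abs(signal - q8))

# float32 round trip (24-bit effective mantissa)
q32 = signal.astype(np.float32).astype(np.float64)
err32 = np.max(np.abs(signal - q32))

print(f"worst 8-bit error:   {err8:.1e}")          # ~4e-03
print(f"worst float32 error: {err32:.1e}")         # ~6e-08
print(f"ratio:               {err8 / err32:.0f}")  # on the order of 2**16 = 65536
```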
Another rather huge can of worms you are opening in the analog world is that parts of analog designs interfere with other parts. It creates design coupling. That’s hugely bad. I’d say it’s even worse of a problem than the lack of numerical accuracy. It means you cannot split up the task of building a large computer among hundreds or thousands of people, because any piece nearby to any other piece will interfere. Digital systems can be designed as isolated modules for the most part, which is why we have such complex designs commonly available today.
I’ll try to make my point a bit clearer.
The duality between theoretical natures of analog and digital approximations is very close. You can’t make either work to arbitrary accuracy, in either value or time domains. Digital signal processing is limited by the sample rate, and analog systems are limited by the bandwidth (or more properly the slew rate) of their elements. And the duality is precise. Similarly the values are always approximate. Analog has intrinsic noise, even if somehow the underlying implementation is perfect, and all cross coupling can be removed. Digital is limited in a range of manners, including underflow, but more importantly numerical artefacts that can lead to dramatic loss of precision in careless implementations of algorithms. Analog systems will clip, or hit the noise floor. A real time DSP based control system still needs to interface to the analog world, and its inputs are no better than that.
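To give one concrete example of the digital-side "numerical artefacts" (my own toy sketch, not from any of the cited papers): computing 1 - cos(x) directly for small x throws away nearly every significant digit, while the algebraically identical form 2*sin(x/2)^2 keeps them.

```python
import math

x = 1e-8

naive = 1.0 - math.cos(x)              # catastrophic cancellation: cos(x) rounds to 1.0
stable = 2.0 * math.sin(x / 2.0)**2    # algebraically identical, numerically safe

print(naive)    # 0.0     -- every significant digit lost
print(stable)   # ~5e-17  -- close to the true value x**2/2
```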
General purpose digital (i.e. a computer, rather than just DSP) can be used in totally different ways. You could be using a symbolic mathematics system and solving the problem analytically. There is no analog computer analogue.
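For instance (using SymPy here purely as a stand-in for whatever symbolic system you prefer), a damped oscillator can be solved exactly, with no question of sample rate, bandwidth or noise floor even arising:

```python
import sympy as sp

t = sp.symbols('t')
y = sp.Function('y')

# Damped harmonic oscillator: y'' + 2*y' + 5*y = 0
ode = sp.Eq(y(t).diff(t, 2) + 2 * y(t).diff(t) + 5 * y(t), 0)

print(sp.dsolve(ode, y(t)))
# Eq(y(t), (C1*sin(2*t) + C2*cos(2*t))*exp(-t)) -- an exact, closed-form solution
```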
Using analog computers for off-line problem solving is clearly limited. The mathematics of the analog remains limited by the abilities of the underlying implementation. Bandwidth limits remain. But in the digital world you also have similar issues. The modern engineer doesn’t plug up an analog computer, he fires up Matlab. If he doesn’t understand the numerical problems in the choice of things like the granularity of the sample space used, the answers will be just as bad as a bandwidth limited analog system. But there are plenty of other ways you can go wrong and get garbage results with Matlab. And the underlying mathematics often have curious similarities with the limitations of analog computers. There is no substitute for a fundamental grasp of the scope and limits of your tools.
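A concrete version of that "granularity" trap, sketched in Python rather than Matlab but with exactly the same failure mode: integrating y' = -20*y with plain forward Euler, a small step tracks the true exponential decay, while a step just past the stability limit turns a trivially stable equation into exponentially growing garbage.

```python
def euler(f, y0, h, n):
    """Plain forward Euler: y_{k+1} = y_k + h*f(y_k), n fixed steps of size h."""
    y = y0
    for _ in range(n):
        y = y + h * f(y)
    return y

f = lambda y: -20.0 * y   # y' = -20*y, exact solution y(t) = exp(-20*t)

# Fine step, well inside the stability limit (h < 0.1 for this equation):
print(euler(f, 1.0, 0.001, 1000))   # ~1.7e-09, near the true exp(-20) ~ 2.1e-09

# Coarse step, same equation, same code:
print(euler(f, 1.0, 0.11, 100))     # ~8e+07 -- the numerical solution has blown up
```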
If you look at analog computers it remains astounding how much you can do with very few active electronic components. Heck they used to be made with tubes. A hundred tubes and you could solve significant problems. You couldn’t even make a start on any sort of digital computing element.
But times change. We can put a billion transistors on a wafer at commodity prices. We can make a large scale FPGA that can be custom coded to produce mind melting dedicated DSP speeds. Analog computers are a historical curiosity. A cool one. But no more than that.
There are challenges in all situations. Even in modern vector instruction sets like AVX you have to deal with loss of precision due to rounding.
The choice of rounding mode (round to nearest, toward −∞, toward +∞, or truncation) can lead to even more issues than trying to model the problem physically.
These cumulative errors can pose serious barriers when dealing with exponential functions, especially in fields that have to deal with non-Euclidean problems.
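A small demonstration of how those rounding errors pile up (my own sketch; single precision is used to make the effect obvious, and the same drift just happens more slowly in double precision):

```python
import numpy as np

x = np.full(10_000_000, 0.1, dtype=np.float32)   # ten million copies of 0.1

running = np.cumsum(x)[-1]   # strictly sequential float32 accumulation, rounding every step
pairwise = np.sum(x)         # NumPy's pairwise summation, also float32
exact = np.sum(x.astype(np.float64))

print(running)    # ~1.09e+06 -- roughly 9% too high
print(pairwise)   # ~1.0e+06  -- pairwise summation keeps the drift tiny
print(exact)      # 1000000.0149... (float32(0.1) is itself slightly above 0.1)
```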
There have been a few issues blocking development of analog solutions for these needs. One I experienced at a previous job is that companies which create compilers to produce analog ASICs tend to be bought by large traditional companies for their IP, which then drop their products. Another issue is a lack of experience with analog computing techniques among EEs.
Oddly enough this latter issue may be less of a barrier moving forward due to the Music industry.
Analog synthesizers are for the most part analog computers, and the rise of the Eurorack modular format has resulted in a boom of people who are both creating and figuring out how to use these technologies. There are now thousands of companies working on new modules that do everything from simple summing to complex chaos calculations.
As an example: oddly enough, I had to resort to an analog computer model to figure out why I could not find a workable method to calculate gravity fall-off at a massive scale. While the breadboard device I built would not have produced numbers that met scientific rigor, it let me detect aliasing in 256-bit vector math, which demonstrated the very real limitations of current general-purpose computing.
Originally I looked at paying for the solution to be computed, but the calculations would have taken years to complete on my hardware and the cost to use a grid was prohibitive.
Full disclosure: I found an interesting method that had been proven by hand in the early 1970s and abandoned due to the lack of computing power. I was hopeful that modern advances would have removed that barrier, but I was ignorant of the accuracy limitations of floating point numbers as they approach zero.
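As an aside on that last point, the near-zero limitation is easy to see directly (a generic sketch, nothing to do with the actual gravity model): relative precision holds up all the way down through the normal floating point range, then collapses in the subnormal range, and below roughly 5e-324 a double silently becomes exactly zero.

```python
import math   # math.ulp needs Python 3.9+

print(math.ulp(1.0))       # ~2.2e-16  -> about 16 significant digits near 1.0
print(math.ulp(1e-300))    # ~1.7e-316 -> still about 16 digits of relative precision

tiny = 1e-320              # subnormal range: the spacing between floats stops shrinking
print(math.ulp(tiny))      # 5e-324    -> only ~3 significant digits left here
print(1e-324)              # 0.0       -> anything this small silently underflows to zero
```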
In my use case I basically used them in a fashion similar to how an interferometer works: by summing the AVX vector results and the analog results into waveforms and comparing them, glitches and aliasing were detectable with a simple mask on an MSO. As the calculations were in 4D, detecting those anomalies purely numerically would have been a far larger task.
Analog computing will never be a replacement for general purpose digital computing but it is a very viable solution for some needs. I hope that the industry does invest time and resources into developing them and with digital controls it should be possible to actually improve accuracy in these use cases.
Like many things in life this is probably best defined not as good or bad but as different. It is the use case that drives the value.
Actually Intel’s latest Xeon CPUs have 7.2 billion transistors on one chip. There are about 700 Xeon dies per wafer, so there are 5 trillion transistors per wafer.
Global semiconductor sales in 2016 were $335 billion. One high-end digital semiconductor fabrication plant can cost up to $14 billion. Annual semiconductor R&D is about $56 billion.
That illustrates the ongoing titanic investment in digital semiconductor technology. This is what analog technology is up against. This doesn’t mean there’s no place for analog, but it is currently dust on the scales.
While the digital may take over at a high level, it’s certainly possible that analog at a lower level is providing some advantages to some processes over digital.
Gap junctions could be an example where nature preferred an analog mechanism for some reason.
Yeah, true. I was being loose. The newest chip I have a naked die of is an Itanium 2, which is about 1 billion. Wafer is of course huge. I remember when we got our first 1 million transistor die. We thought that was pretty special.
Exactly.
I think you would be very brave to uniquely ascribe one or other mode of operation to the brain. Evolution tends to not stick to textbook definitions of much at all. Given that something as simple as changing levels of a given neurotransmitter across the entire brain can change its operation in fundamental ways, a purely digital operation is unlikely. Given there is no master clock, operation is self-timed, but we also know that some functions are modulated by rate of fire. There are clearly components of operation that you can’t sheet home to any form of purely digital logic. Anything involving thresholds is a start.
The very terms digital and analog are already grossly misused, and to try to shoe-horn the brain’s operation into one or other of these camps just stretches things further.
Digital has become a synonym for discrete arithmetic. Usually with the only values being one and zero, and thus some aspects of the operation subject to Boolean logic.
Analog has become a synonym for continuous arithmetic, despite originally being a contraction of “analogue”, that is, computing by way of a physical analogue of the problem.
We have very good reason to believe that a significant part of the higher functioning of the brain operates very much like an analogue computer. But doing so with continuous logic is another matter.
The simplest analog computer I worked on was the Link Ant-8. It was a pneumatic computer that used a one gallon glass jar for an accumulator. A vacuum pump evacuated the jar at a constant rate. The inputs to the jar were variable orifices bleeding in air at rates representing aerodynamic functions and the outputs were vacuum operated instruments. It was a pump, a jar and a bunch of tubing.
The most complex, and probably the largest of the analog computers, was the F100 flight simulator in 1956. Some of the components seem medieval compared to current technology:
The square card resolver gave accurate sine and cosine outputs. The card, mounted on the shaft of a selsyn, was wrapped with bare resistive wire. There were four static contacts positioned 90 degrees apart. One pair gave the sine value and the other pair the cosine of the position of the selsyn.
Metal film resistors were used for accurate summing nodes. To get .01% accuracy they were hand cut on a lathe.
Ganged potentiometers gave linear, log and sine functions. Also a ganged pair attached to a servo gave square and square root functions.
The electrical signals were 400Hz ac to avoid dc offset problems. All vacuum tubes were encased in grounded metal shields to avoid the crosstalk mentioned in posts above.
The computer was a room full of equipment racks. The output was an F100 cockpit with all its instruments inside of a 20 ft spherical projection screen, and another small room for the instructor to monitor the pilot and set up flight situations.
It was analog in and analog out. The processing was massively parallel. It was prone to the same instabilities as the aircraft. If you hit the afterburner and pulled up into a vertical climb you would eventually run out of atmosphere and the aircraft would begin to fall backwards, vertically. The computer was not set up for backwards flight. It became robot theater with all of the servos in oscillation searching for a stable point. The only way out was to turn the thing off and start over. Good clean fun for bored Airmen on the night shift.
With modern planar semiconductor technology there is no reason to revive analog computers. High speed serial processing has won. However, in a couple of hundred years problems and solutions may evolve. Massive parallel processing of analog signals in real time may find its place once again.
Crane
Exactly correct. Nobody knows how the brain actually works, how memory is stored, or whether it is analog, digital, quantum, a mix of those or something else entirely.
In the early 20th century, educators often likened the brain to a complex clockwork machine. By the mid-20th century it was likened to a complex telephone switchboard. Then when computers became commonly used it was (and still is) likened to a complex computer.
We know the brain is not like a machine or a switchboard – those were simply the most complex things available for a comparison in those eras. It is certainly less like a computer than is often stated. The brain is definitely not analogous to a digital computer where each transistor is a neuron or a synapse. It works nothing like any digital or analog computer ever constructed: https://www.quora.com/What-is-the-clock-speed-equivalent-of-the-human-brain
A good example of how little is known about the brain can be seen from neurological studies of the roundworm C. elegans. Its brain has only 302 neurons and only about 7,000 synaptic connections. Scientists have rigorously mapped the entire “connectome” of C. elegans, yet still have no idea how its memory is stored nor the molecular basis for inherited behavior.
The human brain has 100 billion neurons, and estimates of synapses range from 100 trillion to 1,000 trillion. The number of possible neuronal pathways is unknown but sometimes numbers of 10^40 or higher are speculated. To build a connectome of the human brain (if it were possible) might take 2,000 petabytes just to store it, and this would still not reveal how it works: Can we build a complete wiring diagram of the human brain?
To see how different the human brain and computers are, consider estimates of synapse count: 100 trillion to 1,000 trillion. By contrast the world’s upcoming fastest supercomputer, IBM’s Summit, will have 9,200 IBM Power9 CPUs and 27,600 nVidia Volta GPUs. Each Power9 CPU has 8 billion transistors. The transistor budget of Volta is unknown but the Pascal GPU already has 15 billion. Using these numbers (9,200 × 8 billion plus 27,600 × 15 billion), Summit may have roughly 490 trillion transistors. So Summit will be beyond some estimates of human brain synapse count, yet there is no expectation that it will suddenly become intelligent. There is a profound difference between the physical function of the brain and a computer. They are far more different than is commonly discussed.
Just to expand on this: before scientists can try to model/simulate a worm brain, they need to be able to model just one single neuron properly.
A single neuron (some types) is now considered a complex neural network all by itself due to dendrites having many firing points with memory and non-linear computations being performed.
If you read or follow the research, an ever-increasing amount of computational complexity is being discovered, which makes any attempted simulation at this point probably far too early.