It lives! I’m unimpressed with the continuity argument also. As for modeling, I don’t think that would prove anything. There is structural modeling, and there is functional modeling. I don’t see how weather can be modeled structurally. The brain might be (not that we understand it well enough to do a good job.)
But I also don’t think the question of whether the brain is analog or digital has much meaning - it is clear it is both. Neurons on the other hand …
Back to the whole “what’s the future of the computer” thing, wouldn’t it make (some) sense that, at an engineering level, once they start hitting roadblocks with digital circuitry it would be a somewhat logical progression to begin experimenting with adding analog components onto an existing digital system and only using the digital system for digitally accomplishable tasks and as a controller for the new analog components?
I think someone on the first page used an example about how a digital device can’t determine if a given number is rational, but that an analog device can (not sure of the veracity of that, but I’ll go with it since it sounds plausible). So you enter your number into the computer and hit the “findIsRational” button, the computer takes in the number and passes it to some analog device sticking off the motherboard, which does its thing and then returns a simple boolean back to the digital part of the computer, which then prints it for you. Obviously this sort of boolean computing is a really simple example, but you could probably accomplish some pretty amazing stuff if you were creative with how the device structures its return to the digital part of the computer.
You should be.
The argument, though, doesn’t prove that analog is digital. It proves that, philosophically, there’s not much difference. Digital signals carry finite information, and analog signals carry finite information. Moreover, that information can be transformed from one representation to another and it’s the same fn information.
Analog and digital aren’t philosophical concepts. They’re labels for a couple specific technologies we often use. It’s easy to have other technologies, like the aforementioned modem and ethernet, that don’t really fit either of those labels. And there’s nothing mystical about it.
I think when people talk philosophically about analog vs digital, they’re really trying to point out the difference between functions which are smooth and stable and functions which are highly sensitive to starting conditions and behave chaotically. But, in fact, a digital program can easily be smooth and well-behaved and an analog circuit can easily be chaotic.
As I mentioned, System-on-Chip devices commonly include analog blocks. The rational number finder is nonsense, but there are plenty of things analog does better than digital - radio transmission in your cellphone, for example. Nothing new about that at all.
The thing that impressed me the least was that he thought he had to find a complex counterargument against infinite information in an analog symbol - especially when he mentioned information theory.
I think I covered all this already. Digital computing has won because it is more reliable and more scalable - and much easier to design and program.
The philosophical meaning of analog and digital seems kind of pointless. What we are doing here is looking at a design and wondering how to classify it. I don’t think the philosophers have ever read a netlist, and so don’t really get what digital is.
Hmmm. Not quite. Excitation vs. inhibition at the cellular level is moderated via specialized neurotransmitters (excitators and inhibitors) that either summate or extinguish summation. The end result, admittedly, is that the neuron either fires or it doesn’t. However, neural firing is a non-zero-sum proposition. It may take a sum of three to ten excitations to cause a neuron to fire, but only one inhibition to cancel that firing.
As I am a psychologist/musician and not a computer scientist, I’m not at all sure what the upshot of all this is re. the analog vs. digital question. I came to this thread somewhat by accident. You see, I was telling a colleague that–compared with an analog recording on vinyl–listening to an audio CD was like listening to a symphony through a screen door. As we were in the process of sequencing/processing some vocal tracks at the time, I got to wondering whether such a thing as a “modern analog computer” even exists. As of this writing, I guess the jury’s still out on that one.
Welcome, McKeever, to SD. Note that this thread breathed its last, of natural causes, eight years ago, and posters may have gone on to other things.
But your animation of this zombie is well done, and interesting as hell, to me, at least, but be warned, if any convos on its specifics get too interesting, a hijack will probably be gently suggested and a brand new thread can/will be born.
(Signed)
Also a musician, and science fanboy
I clicked to remind that while axon firing may be all-or-nothing, information may be encoded by the frequency and even phase (or exact timing) of the firings. I see that lazybratsche and RaftPeople Ninjaed me by 8 years.
BTW, the view that the neuron body does only a simplistic summation of dendritic inputs is severely challenged in this interesting paper by Hameroff-Penrose. (Free registration required.)
I believe the professor in the OP was referring to three advantages of analog computers:
- Analog computers can generalize between two decision points
- Analog computers are fast
- Analog computers fail gracefully
Modern manufacturing technologies favor digital computers, but the world is still analog.
Crane
Here’s an analog computer from the late ’30s with a specific application. One of its advantages was that the input and output were continuous. The various input devices simply had to be kept on target and the computer kept the solution output on target.
Note that the output of an analogue computer does not have to be a continuous function of the inputs (e.g. imagine a simple voltage limiter where the output voltage is equal to the input voltage up to some threshold, and is equal to zero when the input is greater). Also, a correctly programmed digital computer can certainly compute continuous functions and fail gracefully.
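To make that concrete, here’s a minimal sketch of such a limiter (the threshold and units are my own arbitrary choices, just to show the discontinuity):

```python
# Toy limiter, per the description above: output tracks the input up to a
# threshold, then drops to zero. The threshold value is an arbitrary choice.
def limiter(v_in, threshold=1.0):
    return v_in if v_in <= threshold else 0.0

print(limiter(0.99))   # 0.99 -- follows the input
print(limiter(1.01))   # 0.0  -- a discontinuous jump, from an "analog" device
```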
I just want to add that quantum mechanics shows that some things about reality are not analog. They are inherently quantized.
At least, that was my takeaway. It’s been a long time since I’ve looked at that.
A computer today can already do about 10^17 flops (100 petaflops) average and actually exceeds this in peak performance. By next year a US computer will double this (200 petaflops): Sunway TaihuLight - Wikipedia
The Cray XC50 can supposedly scale to 500 petaflops (five times 10^17 flops): New Cray Unveils XC50 Supercomputer Can Scale to 500 Petaflops | CdrInfo.com
Extremely high floating point performance is not limited to large supercomputers. GPUs can already do 100 teraflops in a hand-held device: AMD shows Vega Cube with 100 TFLOPs
The OP question was about the future of general-purpose computing, IOW what current trends indicate about analog vs digital computing. The digital trends are obvious from this graph (note the y-axis is logarithmic, otherwise the graph would be nearly straight up due to rapid progress): TOP500 - Wikipedia
There are some promising analog optical “computing” methods, e.g., Optalysys: https://www.youtube.com/watch?v=OZenWL44jS4
However these aren’t remotely general purpose and there are questions about achievable resolution. They can handle a few mathematical algorithms very quickly, but they are best described as a highly specialized analog “co-processor”. IOW the CPU ships certain parallelizable tasks to the GPU, then the GPU ships a further small subset of those to an optical processor. While potentially useful for a few narrow tasks, this is not the general future of computing: https://www.quora.com/Are-the-Optalysys-claims-of-exascale-optical-computers-believable
The OP obviously meant relatively general-purpose computing, whether by CPU or GPU. He did not mean some tiny element of a signal chain for one narrow algorithm. Otherwise the contest would be between purely digital ASIC (Application Specific IC) methods vs analog. An ASIC will always be faster for a narrow case such as video encoding than a general-purpose CPU or GPU. Analog methods are no longer competitive for this area.
In specialized fields like RF signal processing, analog methods have already given way to digital, even into the high UHF spectrum. Five years ago it was already possible to simultaneously digitize all channels on a cable TV coax, and implement multiple purely software defined receivers. Unlike analog methods, these can have perfect, mathematically pure demodulation and filters: https://www.youtube.com/watch?v=sspSh7o3x_A
Today almost everyone reading this already owns a software-defined digital radio (Software-defined radio - Wikipedia) which uses purely digital signal processing – not analog. It’s called a 4G cell phone.
So, I used to think this as well. It’s wrong. The reason is pretty simple, and it goes all the way back to the beginning of analog computation.
Each analog circuit you build is not infinitely precise. In reality, you get an analog output + noise.
Let’s suppose that, in the analog world, you get 1.0 volts of valid signal and 0.1 volts of noise added, randomly up or down. So right off the bat, your precision is only good to 10%. Your answer could slew up or down by 10%.
You can’t help the noise - any wire gets currents induced from nearby wires. Even if you use photonics, heat can stimulate photon detectors into false readings.
You do another computation using the same grade of circuitry. But since the first computation is off by 10%, the noise cascades - now the result could be off, up or down, by 21%. And you’ve done just 2 calculations in a row!
As you can see, once you do 10 calculations in a row, your signal is garbage. For many purposes, even being off by 10% on just 1 calculation is bad.
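As a rough illustration of that cascade (a worst-case sketch that simply assumes a flat 10% relative error per stage, not a model of any particular circuit):

```python
# Worst-case compounding of a +/-10% per-stage relative error.
per_stage_error = 0.10

for stages in (1, 2, 5, 10):
    worst_case = (1 + per_stage_error) ** stages - 1
    print(f"{stages:2d} stage(s): worst-case error ~{worst_case:.0%}")

#  1 stage(s):  ~10%
#  2 stage(s):  ~21%
#  5 stage(s):  ~61%
# 10 stage(s): ~159%
```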
Digital systems are analog circuits at the hardware level. But instead of producing an analog output, they produce a signal that is thresholded. Common logic levels today are 0 volts to 0.8 volts for a “low” logic state and 2 volts to 5 volts for a “high” logic state. There is an analog circuit in there - at a hardware level, what do you think a NAND gate or flip flop is? It’s just designed that the next gate in the chain only cares about the 1 or 0 state and not the exact voltage.
Also note that it is perfectly acceptable for the signal to be off by 10%, like I mentioned above. More, even. 10% of 3.3 volts is 0.33 volts, and you can see it actually tolerates about 25% inaccuracy.
So digital circuits let you build a system that is off by 25% and you still get the correct answers. This lets you use much cheaper and more error prone analog circuits, but they are used in a large digital system that gangs billions of them. We have analog computers, today. They just produce all their outputs as discrete values.
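Here’s a toy sketch of that thresholding idea, just to show that re-deciding 1/0 at each gate keeps noise from accumulating. The 3.3 V levels and the noise amplitude are assumptions pulled from the figures above.

```python
import random

V_HIGH, V_LOW = 3.3, 0.0      # nominal logic levels (illustrative)
THRESHOLD = 1.65              # midpoint decision threshold

def noisy(v, noise=0.8):
    """Add up to ~25% of 3.3 V of random analog noise to the node."""
    return v + random.uniform(-noise, noise)

def regenerate(v):
    """Each gate only cares whether the node is above or below threshold."""
    return V_HIGH if v > THRESHOLD else V_LOW

# Pass a logic '1' through a long chain of noisy stages.
v = V_HIGH
for _ in range(1000):
    v = regenerate(noisy(v))
print("still reads as:", 1 if v > THRESHOLD else 0)   # -> 1, every time
```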
Oh, by the way, the brain is a digital computer just like this. Those who think it’s analog are ignorant of signal processing theory and/or lack detailed knowledge of neuroscience.
A couple of elaborations on the above. If you have an analog computer that is only accurate to 10%, you can replace it with a 4-bit digital computer and achieve the same accuracy. Digital computers may “only think in absolutes”, but in reality, a value quantized into 2^4 = 16 levels is already more accurate than that analog computer.
8-bit digital computers are even better, accurate to 1 part in 256. This is why, even in the early 1960s, there was widespread replacement of analog computers with digital. It was immensely better from the very start. This is also why the Apollo lander used a digital flight computer. You get more accuracy, and you can get it with smaller and cheaper analog circuits (assembled into a digital computer), which makes the computer smaller. And since you can load different states from memory to reuse the same circuits, you save even more hardware.
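A quick back-of-the-envelope version of that accuracy comparison, assuming a simple uniform quantizer over the full-scale range:

```python
# Resolution of an n-bit uniform quantizer as a fraction of full scale,
# compared against a ~10%-accurate analog stage.
for bits in (4, 8, 16, 32):
    step = 1 / 2 ** bits
    print(f"{bits:2d} bits -> ~{step:.2e} of full scale")

#  4 bits -> ~6.25e-02 (comparable to a 10% analog stage)
#  8 bits -> ~3.91e-03 (1 part in 256)
# 16 bits -> ~1.53e-05
# 32 bits -> ~2.33e-10
```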
The reason the brain is digital is that while it does use analog comparators - much like the chip that you are using to view this web page - each output signal from a processing stage is still all or nothing. The pulse timings are not infinitely precise, and there’s a noise band, so the human brain is more accurately described as a hybrid solution: it does produce noisy outputs, but the noise at each stage is constrained by digital techniques. Nevertheless, you can quite accurately mimic smaller animal brains with digital computers, and with this deep learning stuff it turns out you can quite easily build a program, running on digital computers, that has many of the same qualities as brains have. The 1-or-0 limitation doesn’t mean anything; neural net algorithms use 32-bit numbers as their basic unit of computation, and those signals can vary over an enormous range.
I’ll add a few points. Analog isn’t quite that bad, but it has a number of very specific limitations.
Digital computers are general purpose. A big part of the reason for using a digital computer on the Apollo missions was that the computer was capable of a very wide range of tasks. Once you have the basic hardware it is just a matter of choosing which programs to run. Exactly the same hardware could be running the navigation software or listening to the radar and performing real time tracking. An analog computer would need tearing down and reconfiguring.
Further, analog computers are restricted to a range of problems. They provide analogues of the main mathematical building blocks: integrators, differentiators, adders, multipliers. You can lash up a system that calculates a complex system pretty easily, and use it to gain an understanding of systems that would be intractable mathematically. In particular control systems, where stability analysis is critical to success. But also modelling the behaviour of complex systems. One would note that prior to Apollo, the mission simulation teams used analog computers to provide real-time simulated inputs for training and tests. A great deal of the Apollo systems were probably tested in much the same way, although the mission control work had progressed to using digital computers.
Cascading errors is not peculiar to analog computers, but it manifests itself in different ways in digital systems. Quantisation errors in the form of rounding and truncation of results can quickly render a poorly thought out digital system useless. (It bothers me deeply that so many students don’t get taught any of this anymore.) But even the most carefully designed, near perfect, analog system is limited by thermal noise. Also analog computers have intrinsic limits - things like slew rate of the op-amps. Any system has to be analysed to ensure none of these limits is exceeded, otherwise the results are meaningless. Then again, you can max out a digital representation. And digital systems performing real time control are quantised in time, which is mathematically identical to bandwidth limiting of an analog system.
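A quick illustration of the rounding/truncation point, assuming ordinary IEEE 754 double precision (Python here, but the effect is language-independent):

```python
import math

# 0.1 has no exact binary representation, so naive repeated addition drifts.
total = 0.0
for _ in range(1_000_000):
    total += 0.1

print(total)                          # ~100000.00000133288, not 100000.0
print(math.fsum([0.1] * 1_000_000))   # compensated summation: 100000.0
```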
Overall you can’t really win, but technology has taken us down the digital route a very long way.
Francis, analog systems have the exact same truncation problem - any op-amp cascade you build is going to clip at the supply voltage. And you get a far greater range of magnitude with digital floating point; we’re talking performance differences of billions of times. So in truth, no, I’d say you’re essentially wrong: digital systems are generally thousands to billions of times better than analog, depending. It’s pretty much never a contest.

The only real advantage of analog is the time resolution you mentioned. For instance, on a project I did recently, I needed to filter out analog noise cleanly and perfectly, with a so-called “brick wall” filter. Some of the noise was within a few Hz of the passband. The analog filter starts to roll off at a lame 150 Hz and doesn’t really get to zero (below ADC resolution) until 5 kHz. Pretty awful. With 3 FIR stages and relatively modest computational requirements (using decimation), I was able to get a brick-wall rolloff where the signal magnitude was zero at 5 Hz. Yes, I could have had the electrical engineer on the project install more and better analog filters, but cascaded analog filters cause problems of their own, and I would still have needed massive digital filters to reach the needed performance. The one advantage of the analog filter is that it does attenuate anything above 5 kHz, including GHz-level signals in theory, while I was only sampling at 10 kHz. So the digital side is very much quantized in time, like you’re saying, while the time constant of the analog system was much quicker.
- by zero, I mean it was below the threshold for one bit.
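For what it’s worth, here’s a hedged sketch of the kind of decimate-then-FIR approach described above, using SciPy’s firwin/lfilter/decimate. It is not the actual filter chain from that project; the sample rate, cutoff, tap count, and test tones are all illustrative guesses.

```python
import numpy as np
from scipy import signal

# Hedged sketch only - not the actual filter chain from that project. The
# sample rate, cutoff, tap count, and test tones are illustrative guesses.
fs = 10_000                                 # original sampling rate, Hz
t = np.arange(10 * fs) / fs                 # 10 seconds of data
wanted = np.sin(2 * np.pi * 2 * t)          # in-band tone at 2 Hz
interf = np.sin(2 * np.pi * 10 * t)         # interferer just above the cutoff

# Stage 1: decimate to 1 kHz (decimate applies its own FIR anti-alias filter).
x = signal.decimate(wanted + interf, 10, ftype='fir')
fs_dec = fs // 10

# Stage 2: long FIR low-pass with a very narrow transition band near 5 Hz.
taps = signal.firwin(2001, cutoff=5, fs=fs_dec)
y = signal.lfilter(taps, 1.0, x)

steady = y[len(taps):]                      # skip the filter's startup transient
print("output peak ~", round(float(np.max(np.abs(steady))), 3))  # ~1.0: the 10 Hz tone is gone
```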
Not the exact same truncation problem. As an example, an analog computer is superior for solving, analyzing, and manipulating differential equations.
Also, you run into rounding issues with floating point in digital systems, particularly when dealing with values at the edges of the available number space.
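For example, here is what happens at the edges of double precision (a generic illustration, not tied to any particular application):

```python
import sys

big = 1e16
print(big + 1 == big)           # True: 1.0 is smaller than the float spacing here
print(sys.float_info.max * 2)   # inf: overflow past the representable range
print(5e-324 / 2)               # 0.0: underflow below the smallest subnormal
```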
If they are superior why isn’t UCLA still using its analog differential analyzer?
The UCLA DA was used in “When Worlds Collide”: http://www.criticalcommons.org/Members/ccManager/clips/differential-analyzer-cameo-in-when-worlds-collide
Here is creator Vannevar Bush with one at MIT: http://xirdalium.net/wp-content/uploads/bush_analyzer.png
Why aren’t they using any other 50-year-old system?
But computing is not completely driven by need; often it follows fads, and other times niche uses are dropped because the market is too small to justify the expense. In General Relativity, as an example, there are several mathematical approaches that have been ignored for 40 years due to the limitations of digital computers. It was not because they were a dead end, but because they were difficult to accomplish on digital computers, so people just found other areas to work on.
Here is a cite from this century of a more modern effort.
Evaluation of an Analog Accelerator for Linear Algebra | IEEE Conference Publication | IEEE Xplore
In some rare cases people can get by with digitally emulated analog computers, but if the products were available, they would absolutely leverage a real analog computer.