What exactly are modern analog computers and how are they better?

A professor told me that the future of computing is analog, and that analog machines would be capable of greater power, such as more sophisticated artificial intelligence. Trying to look up what the benefits of a modern analog computer would be only made things worse in my head.

So what exactly are modern analog computers (not astrolabes, but modern electronic ones), and why are they better? What do they do better, and how could they even work?

[del]Sounds like a contradiction to me. The most recent analog computer I can think of is a slide rule.[/del]

Then I looked it up on Wikipedia: Electronic analog computers

They are not better. They are worse. Digital signals are unambiguous, in the absence of severe noise. Even mild to moderate noise is enough to skew an analog system, and when you’re doing multiple billions of calculations in your program, those errors are cumulative. Error correction of the sort used in digital systems is not possible, compounding the problem.
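To put a toy number on how those errors pile up, here's a quick sketch (the noise level and the length of the chain are made up, purely for illustration):

```python
import random

# Toy model: add 1.0 to an accumulator a million times.
# "Digital" does it exactly; "analog" picks up a small random error
# on every single operation.
N = 1_000_000
digital = 0
analog = 0.0
for _ in range(N):
    digital += 1
    analog += 1.0 + random.gauss(0.0, 1e-4)

print("digital result:", digital)        # exactly 1000000, every time
print("analog result: ", analog)         # drifts by roughly sqrt(N) * sigma
print("accumulated error:", analog - N)  # typically on the order of +/- 0.1
```

The digital sum is exact no matter how long the chain gets; the "analog" one drifts by an amount that grows with the number of operations.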

Well, a WAG here.

The human brain is more analog than digital (maybe) and certainly noisy (hush Bob, I don’t care about the army of the 12 monkeys! I am trying to type here), so one could make the argument that analog might be required for artificial intelligence.

You can simulate analog with digital, but that might still cause problems.

And certain calculations are probably, IMO, good enough when done analog, even though in theory they might not be as precise as digital. A crappy answer with somewhat uncertain precision/accuracy is still better than one where the accuracy/precision is well known but you never get the answer.

Like most professors, this guy has his head in the clouds and is completely disconnected from reality.

We’ve spent the past 60 years inventing and improving the digital computer. Digital signals have discrete values which can be communicated without degradation and checked for errors.

Digital data can be stored in an unambiguous way, with a known level of precision. The precision of an analog system depends upon its components, and there’s no guarantee that two uncalibrated systems will treat a given analog value exactly the same way.

Analog calculations are generally one-way, involving the summing of currents and similar operations. They require massive amounts of hardware to implement complex operations which on digital systems would simply be handled by software or the OS.
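To give a concrete feel for “summing of currents”: an ideal inverting summing amplifier produces an output proportional to a weighted sum of its input voltages, with the weights fixed by resistor ratios. A toy model (the component values are made up for illustration; a real circuit adds noise, offsets, and tolerance errors):

```python
# Ideal inverting summing amplifier: Vout = -Rf * sum(Vi / Ri).
# Each weight is set by a physical resistor ratio, which is why changing
# the computation means changing the hardware.
def summing_amp(inputs_volts, input_resistors_ohms, feedback_resistor_ohms):
    return -feedback_resistor_ohms * sum(
        v / r for v, r in zip(inputs_volts, input_resistors_ohms)
    )

# Weights of 1.0, 0.5, and 2.0 on three input signals:
vout = summing_amp([0.3, 1.2, -0.4], [10e3, 20e3, 5e3], 10e3)
print(vout)  # -(0.3*1.0 + 1.2*0.5 + (-0.4)*2.0) = -0.1
```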

But if you want to see a cool, modern analog computer, this guy built an entire differential analyzer out of Meccano.

He wasn’t mistaking it for Quantum computing by any chance?

A professor of what?

Digital computing is unambiguous, which is why we use it. However, analog computing has advantages precisely because it is ambiguous. That’s its basic advantage, and it allows it to do things digital systems can’t. There’s a non-trivial argument that you can’t fundamentally make an intelligent digital AI, because you wind up destroying the thing you’re trying to make; it can’t think with just on/off states. (The theory is more complex and deeper than I present here, but it’s not easily dismissed.)

Could you point to a presentation of this theory that does justice to its complexity and depth?

I’m skeptical that anyone has built an analogue computer that can “do things”, in any precise functional sense, that a digital computer cannot.

Back in the day optical computing was all the rage in theory.

Take an image, many pixels by many pixels. Do a 2-D Fourier transform on it. That’s computationally intensive as hell.

A lens “automatically” does that to a whole image at the speed of light. So, in theory, it’s massively parallel and about as FAST as you can get. And it doesn’t generate any heat in the process.

I guess the problem is that most computations are not easily/efficiently “transformable” into a 2-D Fourier transform problem in order to be solved. And then there is the data I/O problem. But again, in theory it’s da bomb.
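For comparison, here's the digital version of what the lens does for free (a minimal numpy sketch; the 1000 x 1000 size is just for illustration):

```python
import numpy as np

# The digital counterpart of what a lens does optically: a 2-D Fourier
# transform of a complex field sampled on a grid.
rng = np.random.default_rng(0)
field = rng.standard_normal((1000, 1000)) + 1j * rng.standard_normal((1000, 1000))

# Typically takes tens of milliseconds on a modern CPU, versus roughly
# the light's transit time through the optics for the lens.
spectrum = np.fft.fftshift(np.fft.fft2(field))
print(spectrum.shape, spectrum.dtype)  # (1000, 1000) complex128
```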

I would say that such a computer is not da bomb even in theory if most computations cannot be efficiently formulated in terms of the computer’s functions. At best it is da bomb in a very incomplete theory.

That’s basically the principle.

A neuron, for example, doesn’t just fire in sequence. Neurons fire in weird patterns, according to their own internal logic and connections, and their nature can change over time. Digital calculations are always done in sequence and each is entirely unconnected to the next.
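(If anyone wants a concrete toy: here's a minimal leaky integrate-and-fire sketch, the standard simplified model of that kind of firing behavior. The constants are illustrative, not biological measurements.)

```python
import random

# Leaky integrate-and-fire neuron: the membrane potential continuously leaks
# toward rest, accumulates incoming current, and emits a spike whenever it
# crosses a threshold -- spike timing depends on the whole input history,
# not on a fixed clocked sequence.
dt, tau, v_rest, v_thresh, v_reset = 1e-3, 20e-3, 0.0, 1.0, 0.0
v = v_rest
spike_times = []
for step in range(2000):                      # simulate 2 seconds
    current = 1.2 + random.gauss(0.0, 0.5)    # noisy input drive
    v += dt / tau * (-(v - v_rest) + current)
    if v >= v_thresh:
        spike_times.append(step * dt)
        v = v_reset

print(f"{len(spike_times)} spikes in 2 s, first few at {spike_times[:5]}")
```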

Now, the problem (among others) here is that, ironically, digital can’t handle ambiguity. An analog system can be self-correcting. A digital one will rapidly go out of whack, because it can never check its own work properly. It makes a mistake - maybe a tiny one, or maybe it’s not a mistake at all, just bad data. But that knocks the next calculation out, and the next, and the next. Bam, the system breaks.

Analog systems can take in the whole data set, and errors are adjusted for automatically. Missing data can be assumed. You can adjust errors on digital systems, but then you have to have a whole 'nother system checking for them, and then another to check that, and another. A closely related feature is massive parallelism: analog systems are inherently parallel-function designs.

Basically, digital is extremely precise but limited in “robustness”. Analog is unlimited in robustness but limited in precision.

http://www.mikiko.net/library/weekly/1998articles/aa053198.htm - disagrees, but explains the basic idea. I’m having trouble finding better sources right now because of too much garbage on Google, and some older ones are dead links.

smiling bandit, where are you getting these notions about how digital computers work? Wherever you are getting them, I suggest finding another source. You have been severely misinformed.

Start by learning about error correcting codes.
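To make that concrete, here is about the simplest error-correcting code there is, a triple-repetition code with majority vote (a toy sketch; real systems use far more efficient codes like Hamming or Reed-Solomon):

```python
import random

def encode(bits):
    # Repeat every bit three times.
    return [b for bit in bits for b in (bit, bit, bit)]

def decode(coded):
    # Majority vote over each group of three.
    return [1 if sum(coded[i:i + 3]) >= 2 else 0
            for i in range(0, len(coded), 3)]

message = [1, 0, 1, 1, 0, 0, 1, 0]
sent = encode(message)

# Flip each transmitted bit with 5% probability (a noisy channel).
received = [b ^ (random.random() < 0.05) for b in sent]

print("decoded correctly:", decode(received) == message)  # almost always True
```

A single flipped bit in any group of three is corrected outright; the decoder only fails when two of the three copies happen to flip.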

Are you here to contribute to the discussion or to nitpick?

How many qualifiers does this place require before somebody will let something “slide”?

Geezus.

I am a walking biological 3-D AND temporal transform system. Get back to me when you’ve got a digital equivalent of me :slight_smile:

Personally, I think Pushkin had the answer - should have been Quantum Computing.

Digital vs. analog. Not a computer, perhaps, but sound recording and reproduction may be an apt analogy. Digital slices up an analog waveform into smaller and smaller samples to “approximate” the information; sampling rate and bit depth are the factors. An analog recording system records the actual waveform on tape or other media.

Digital reproduction at high rates and depths gets close to the original waveform/sound; it can also sound pretty sh**y and lifeless at low rates (crappy MP3s). Analog reproduction gets closer to realism but does come with the crackles, hiss, and pops. Your favorite LP or tape may develop “noise” but still sounds like music; the CD, if damaged/deteriorated, simply won’t play.

I am aware that many, perhaps most, sound reproduction systems incorporate digital elements (switching amplifiers, for example), but analog front to back is still viable.
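A rough sketch of the sampling-and-bit-depth half of that analogy (the rates and depths are just illustrative):

```python
import math

def max_quantization_error(freq_hz, sample_rate_hz, bits, duration_s=0.001):
    """Sample a sine wave, round each sample to a fixed bit depth,
    and report the worst-case rounding error."""
    levels = 2 ** (bits - 1)
    n = int(sample_rate_hz * duration_s)
    samples = [math.sin(2 * math.pi * freq_hz * i / sample_rate_hz)
               for i in range(n)]
    quantized = [round(s * levels) / levels for s in samples]
    return max(abs(q - s) for q, s in zip(quantized, samples))

# More bits per sample -> a finer grid -> a smaller "approximation" error.
print("8-bit error: ", max_quantization_error(1000, 44100, 8))   # ~0.004
print("16-bit error:", max_quantization_error(1000, 44100, 16))  # ~0.000015
```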

I can see a few cases where a system that tolerates higher noise levels could be preferable to one that breaks down in spite of error-correction algorithms.

Man, I just read about some new chip that uses fuzzy logic or analog or something. They even built a demo chip, and its power consumption was much lower. Please, someone has to remember this announcement.

You can do fuzzy logic digitally. In fact, a digital computer can simulate any analog computation (to within a certain degree of accuracy.)
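For example, the classic differential-analyzer problem, a harmonic oscillator, takes only a few lines of numerical integration to simulate digitally (a minimal Euler-style sketch; the accuracy is limited by the step size, which is exactly the “certain degree of accuracy” caveat):

```python
import math

# Digitally emulating two chained analog integrators solving x'' = -x
# (a harmonic oscillator), the textbook differential-analyzer setup.
dt = 1e-4
x, v = 1.0, 0.0                         # initial position and velocity
for _ in range(int(2 * math.pi / dt)):  # integrate over one full period
    v += -x * dt                        # first integrator: accumulate acceleration
    x += v * dt                         # second integrator: accumulate velocity

print("x after one period:", x)         # close to 1.0, off by the step error
print("exact answer:      ", math.cos(2 * math.pi))
```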

Yeah, my post was way too hasty, and now I can’t find the reference. I’ve been searching the blogs I usually visit and some tech sites, but no luck.

Statements like “analog computers are not better” are misleading at best, if not flat-out wrong, at least in my opinion. I should probably just say “cite?” to these claims, but that probably wouldn’t be productive.

The term analog computing is pretty vague and covers a lot of ground. I am not a computer scientist, so it would be difficult for me to talk generally about the differences and advantages of analog computers. I can, however, talk about stuff I have done: in grad school I worked on a project to use spectral holography to do signal processing of range/Doppler lidar signals (here is a publication with the initial research). This is absolutely analog processing/computing, using optical signals instead of more conventional electronics, and it is orders of magnitude higher speed and has orders of magnitude more bandwidth than is possible with current digital technology.

More generally, note that a simple lens performs a 2-D Fourier transform of coherent fields at its focal plane. Take a pixelated image (say a 1000 x 1000 spatial light modulator) and a good laser and you can easily do a 2-D Fourier transform in less than a microsecond. This is equivalent to 10^18 analog multiplies/s. If you were using a standard FFT N log(N) calculation, this rate would be equivalent to 2 teraflops. This is only for a single lens.

The problem with most optical processors is that they perform only very specific operations (Fourier transforms, convolution / correlation / filtering / pattern recognition, etc.). But they do those very well, much faster than a digital system could hope to.

Here is an excellent presentation (warning: PDF) by a co-student (is this a word?) of mine on a squint-compensated RF imaging array system capable of more than 10^17 flops of image processing. Find me a computer that can match it…

Are you thinking of this:
http://www.electronista.com/articles/09/02/08/rice.university.pcmos/ ?