For the rest of the audience, let me explain in simpler detail what I mean.

Let's suppose you have an analog computing circuit that adds voltage 1 and voltage 2 together to produce an output.

Common sense would say that since it's an analog circuit, if voltage 1 is 1.000000000000001 volts and voltage 2 is 1.000000001 volts, the output would be 2.000000001000001 volts.

And in fact this system would be infinitely precise. Any teensy change in the inputs leads to the same teensy change on the output.
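
Here's a minimal sketch of that ideal adder in Python, using the decimal module so the arithmetic is exact rather than limited by floating-point precision (the voltages are just the made-up ones from above):

Code:
    from decimal import Decimal, getcontext

    getcontext().prec = 30  # more digits than we need for this example

    v1 = Decimal("1.000000000000001")  # volts
    v2 = Decimal("1.000000001")        # volts

    # An ideal analog adder passes every last digit through.
    print(v1 + v2)  # 2.000000001000001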

You wouldn't be able to replace this circuit with a digital system, because a digital system uses discrete values. Let's say that for technical reasons, voltage 1 and voltage 2 range from 0 to 3 volts, and you use an 8-bit digital adder along with 8-bit ADCs and DACs. That discretizes the signal into 256 levels (255 steps), so the digital system is only accurate to about one step: 3/255 = 0.0118 volts.
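
A sketch of what that 8-bit quantization does, assuming an ideal ADC/DAC pair over the 0 to 3 volt range (round-to-nearest is my assumption; real converters vary):

Code:
    V_MAX = 3.0   # full-scale range in volts
    STEPS = 255   # 8 bits: codes 0..255, i.e. 255 steps

    def quantize(v: float) -> float:
        """Snap a voltage onto the nearest 8-bit code, then convert back."""
        code = round(max(0.0, min(V_MAX, v)) / V_MAX * STEPS)
        return code * V_MAX / STEPS

    print(V_MAX / STEPS)                 # step size: ~0.0118 V
    print(quantize(1.000000000000001))   # 1.0 -- collapses to a code
    print(quantize(1.000000001))         # 1.0 -- same code as above

Both of the 'infinitely precise' inputs from before land in the same bucket, which is exactly the common-sense objection.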

When you add noise into the mix, though, things get interesting. Suppose the adding circuit itself injects random voltage leakage of 10% of the full range, peak to peak. This leakage happens because nearby circuits (it's packed very, very tightly) inadvertently induce voltages into this circuit some of the time. Now the analog circuit is only accurate to +/- 0.15 volts, and the digital equivalent is more than 10 times better.
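
A quick simulation of that comparison; the 10% peak-to-peak leakage is the hypothetical figure from above, not a measured one:

Code:
    import random

    V_MAX = 3.0
    STEP = V_MAX / 255        # 8-bit quantization step, ~0.0118 V
    NOISE_PP = 0.10 * V_MAX   # 10% of full range peak to peak = 0.3 V

    random.seed(0)
    worst_analog = worst_digital = 0.0
    for _ in range(100_000):
        v1, v2 = random.uniform(0, 1.5), random.uniform(0, 1.5)
        true_sum = v1 + v2
        # Analog adder: true sum plus uniform leakage of +/- 0.15 V.
        analog = true_sum + random.uniform(-NOISE_PP / 2, NOISE_PP / 2)
        # Digital adder: noiseless, but snapped to the nearest code.
        digital = round(true_sum / STEP) * STEP
        worst_analog = max(worst_analog, abs(analog - true_sum))
        worst_digital = max(worst_digital, abs(digital - true_sum))

    print(worst_analog)   # approaches 0.15 V
    print(worst_digital)  # at most half a step, ~0.0059 V

The noisy analog circuit's worst-case error comes out well over 10 times the digital one's, which is where that claim comes from.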

The brain uses both analog voltages and analog timing pulses. Both are, as it turns out in modern experiments, horrifically noisy.

Let's suppose you wanted to do a lot less than understand consciousness, visual processing, or even what a given functional brain region is doing. All you want to do is copy the function of a single synapse. So you build a very teensy computer chip, you paint the electrodes with growth factors, and you, for the sake of argument, have an electrically equivalent connection to the input and output axons of a single synapse. You can observe both the inputs and the outputs, and once you are confident in your model, remove the synapse and replace it.

Say it's a simple one. There are 10 input signals and 1 output. All I/O is all or nothing (1 or 0), but the spikes happen at exact times. Analog timings, actually...
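
To make that concrete, here's a toy model; the weights, threshold, and decay constant are invented for illustration, not taken from any real synapse:

Code:
    import math

    # Hypothetical parameters, purely illustrative.
    WEIGHTS = [0.3, 0.1, 0.25, 0.2, 0.15, 0.3, 0.1, 0.2, 0.25, 0.15]
    THRESHOLD = 0.5
    TAU_US = 500.0  # decay time constant, microseconds

    def output_fires(spikes: list[tuple[int, float]], now_us: float) -> bool:
        """spikes: (input index 0..9, spike time in microseconds).

        Each input spike contributes its weight, decayed by how long ago
        it arrived; the output fires if the total crosses the threshold.
        """
        total = sum(
            WEIGHTS[i] * math.exp(-(now_us - t) / TAU_US)
            for i, t in spikes
            if t <= now_us
        )
        return total >= THRESHOLD

    # Two near-simultaneous inputs fire the output...
    print(output_fires([(0, 100.0), (5, 120.0)], now_us=150.0))    # True
    # ...but the same two inputs spread far apart in time do not.
    print(output_fires([(0, 100.0), (5, 2100.0)], now_us=2150.0))  # False

The point isn't that this particular model is right; it's that the inputs and output are all-or-nothing while the timing is continuous.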

So again, if you use a digital system, it has a discrete clock. It might run at 1 MHz, meaning you cannot subdivide time any smaller than 1 microsecond. But for the exact same argument as above, due to noise, you only need to do somewhere between 2 and 10 times better than the analog system to have a working digital replacement.
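
The same quantization logic from the voltage example applies to spike times. A sketch, assuming that hypothetical 1 MHz clock:

Code:
    TICK_US = 1.0  # 1 MHz clock: one tick per microsecond

    def quantize_time(t_us: float) -> float:
        """Snap a spike time onto the nearest clock tick."""
        return round(t_us / TICK_US) * TICK_US

    t = 1234.567  # a spike time in microseconds
    print(quantize_time(t))           # 1235.0
    print(abs(quantize_time(t) - t))  # rounding error, at most half a tick

If the biological jitter on those timings is several microseconds, a worst-case rounding error of half a microsecond disappears into the noise.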

Similarly, the brain possibly does some really tricky stuff at synapses. But whatever tricky stuff it does is heavily contaminated by noise, so in reality you again don't need to do all that well. Newer research indicates you might need a sequence predictor in your model, for example. But it need not be a particularly high-resolution one.
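
By 'sequence predictor' I mean something as crude as this sketch: count which recent input patterns have historically preceded an output spike, and predict from the counts. (The history length and the counting scheme are my assumptions, purely for illustration.)

Code:
    from collections import defaultdict

    class CrudeSequencePredictor:
        """Predict output spike probability from recent input patterns."""

        def __init__(self, history_len: int = 3):
            self.history_len = history_len
            self.fired = defaultdict(int)  # history -> output spike count
            self.seen = defaultdict(int)   # history -> times history seen
            self.recent = []               # last few input patterns

        def observe(self, inputs: tuple[int, ...], output: int) -> None:
            """Record one timestep: 10 binary inputs plus the binary output."""
            self.recent = (self.recent + [inputs])[-self.history_len:]
            key = tuple(self.recent)
            self.seen[key] += 1
            self.fired[key] += output

        def predict(self, inputs: tuple[int, ...]) -> float:
            """Estimated probability the output fires given these inputs."""
            key = tuple((self.recent + [inputs])[-self.history_len:])
            if self.seen[key] == 0:
                return 0.5  # never seen this history: no information
            return self.fired[key] / self.seen[key]

Low resolution is the point: a lookup table of counts, not a physics simulation.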

So if you can replace 1 synapse perfectly in theory (though it obviously isn't physically possible to do to a living brain today, because biology is too fragile and unreliable), you could in theory replace 10% of them. Or 50%. Or 100%. You would also have to duplicate the rules that cause new synapses to form, duplicate the update rules, and duplicate the other analog signals the brain uses. It would by no means be an easy task.

However, this argument is 'standing on the shoulders' of many giants who have perfected their signal processing theories over decades. It's bulletproof. There are no circumstances under which this hypothetical brain copying would not function in the real world. There is nothing the brain could be doing, save actual supernatural magic, that can't be copied by a discrete digital system.
