Do you believe in a soul?

It’s proving difficult to find accurate estimates of the number of neurons and their synapses in the brain of a cockroach. I might have to substitute the fruit fly, aka D. melanogaster, which we know has around 100,000 neurons in its brain…

Genetic algorithms. These are one of a set of heuristics that consist of adjusting various coefficients, based on experience or learning, to solve problems more effectively. The first example of this was Samuel’s checkers program, started over 45 years ago. This sort of thing has its uses, but is not going to lead to machine intelligence. Fuzzy logic, GAs, simulated annealing, and a host of other techniques all fall into this class.
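For concreteness, here is a minimal sketch of what that coefficient-adjusting loop looks like. The fitness function is an invented stand-in (Samuel’s program scored checkers positions instead), and every name and number here is arbitrary:

import random

# Made-up objective: prefer coefficient vectors near a target.
# Samuel's program evaluated checkers positions instead.
def fitness(coeffs):
    target = (1.0, -2.0, 0.5)
    return -sum((c - t) ** 2 for c, t in zip(coeffs, target))

def evolve(pop_size=20, generations=50, mutation=0.1):
    # Start from random coefficient vectors.
    population = [[random.uniform(-3, 3) for _ in range(3)]
                  for _ in range(pop_size)]
    for _ in range(generations):
        # Keep the better half, refill with mutated copies of survivors.
        population.sort(key=fitness, reverse=True)
        survivors = population[:pop_size // 2]
        children = [[c + random.gauss(0, mutation)
                     for c in random.choice(survivors)]
                    for _ in range(pop_size - len(survivors))]
        population = survivors + children
    return max(population, key=fitness)

print(evolve())   # drifts toward (1.0, -2.0, 0.5)

Nothing in that loop understands the problem; it just keeps whatever scores well, which is exactly the “useful heuristic, not intelligence” point being made.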

Your objection is called the Chinese Room Argument. Basically it states that someone in a room with a book that lets him answer any question in Chinese, by looking up each input card in the book and copying out the book’s response, would not truly understand Chinese, even while passing a Turing test for it. First, I don’t believe any such system is possible. But say there were several people in the room, one of whom answered questions from the book while another revised the book based on responses. Say there were several such people. If you go far enough, you might get to the point where the room understood Chinese, even if no person in it did. Our neurons are not intelligent, but the collection of neurons is.
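To make the setup concrete, here is a toy version of the room. The phrases in the rule book are placeholders; a real book would need an entry for every possible conversation, which is part of why I doubt it could exist:

# A toy Chinese Room: one function answers from a rule book, another
# revises the book based on feedback. Neither function understands
# anything; the question is whether the whole system could.
rule_book = {
    "ni hao": "ni hao",          # placeholder entries, not a real book
    "ni hao ma?": "wo hen hao",
}

def clerk(question):
    # Looks up the card and copies out the response; understands nothing.
    return rule_book.get(question, "qing zai shuo yi bian")

def reviser(question, better_answer):
    # Updates the book based on responses; also understands nothing.
    rule_book[question] = better_answer

print(clerk("ni hao ma?"))       # "wo hen hao"
reviser("xie xie", "bu ke qi")
print(clerk("xie xie"))          # "bu ke qi"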

This happens all the time. No person in Intel understands all facets of a microprocessor design (trust me on this) but the organization as a whole does. Asking which neuron or signal is the seat of intelligence makes no more sense than asking which gate or wire runs a program.

Are you familiar with the experiment that showed the movement of a hand was set up before a person decided to move the hand? Our consciousness thinks it is making decisions, but it seems to be reporting on decisions that have already been made below that level. I’m sure you’ve solved a problem without thinking about it. Something is going on down there. My program that understands programs may not have access to all the processes that are running, some of which may be dispatched, do a job, and report back.

It’s frustrating to me because I’m arguing both sides: I think the mind-body problem is real but resolvable, and I’m stuck between someone who thinks it’s real and unresolvable, and someone who doesn’t think a problem exists at all. If I try to argue in a way that one of you understands, I lose the other one. Maybe that’s a sign that I’m inconsistent and my position doesn’t actually hold together.

Anyway, I’ll think about the rest of what you wrote and see if I come up with a response for you. It’ll probably be tonight, but there’s always the chance I’ll get distracted by real life and forget. If I let it go too long, feel free to email me or bump the thread (since I’m subscribed) and I’ll sit down and type something.

The room might understand Chinese, but wouldn’t be aware it did - or if one of the people in the room understood that the whole room understood Chinese, then you’ve got the ‘center’ I’m looking for. Does that center exist in our brains? If not, then how do we know we understand things?

I am not familiar with the experiment. It seems to argue against free will. Either that, or whatever made the decision below the conscious level was capable of turning will into action.

Oh. Okey-dokey then. I guess I just got jealous because you said you agreed with him. Two-timer.

How would you know that, except via testimony from the room - which, by definition, validly answers any question in Chinese? It passes the Turing test, so is by definition intelligent. You’d have to accept questions on self-awareness as valid questions, right? Both the room and our brains are composed of discrete and interconnected components. If you say that our brain is overlaid with something else, a soul, why can’t the room be? Why couldn’t God or a genie or whatever add the soul to the room just like it does for a baby?

I think a lot of us visualize our egos as little men or women sitting at some sort of control panel in our heads, looking at screens showing images from our eyes, and pushing buttons to make our bodies work. The experiment shows that there is no little man (figurative, of course) but that something else is pushing the buttons, and only later telling our conscious mind about it - which, because it has this image of itself, convinces itself it pushed the button.

This has taken a while, and I’ve more questions now than answers. I underestimated the level of information available about insect brains. Unlike C. elegans, which has had every neuron mapped out, the Drosophila brain, while explored to a marvelous degree, lacks the extensive description of connectivity and combinatorics one can glean from what is known of the roundworm. That said, no one has created a realistic simulation of a roundworm, much less a fly. An article I read in print some years ago said such a feat would be impossible with contemporary computing power. I’m not sure if Moore’s law has changed that, and people just haven’t tried.

Just to give an idea about the level of complexity we’re talking about, though, compare a schematic of the fruit fly protocerebral bridge with a typical neural network, consisting of input, “hidden”, and output layers, interconnected by some number of purportedly synapse-like junctions. Remember, the average synapse might secrete a particular neurotransmitter, in differing amounts, in bursts of different frequency. The concentration will lead to autostimulation via homoreceptors on the pre-synaptic neuron to modulate the “volume” of this signaling, while there is also a differential response to stimulation of the post-synaptic heteroreceptors depending on said “volume” (itself a function of many variables). A very simple logic gate for multiple inputs, to simulate interconnected neurons, might be an XOR or XNOR gate (see the truth table), but the output for such a gate is one bit of information. Not very interesting. How many bits are needed to simulate all the possible outputs of even a single neuron? And what are the true logical operations performed on the inputs to produce an output? To be honest, I don’t know how many registers would be required. Maybe we could focus just on the number of connections. Keep in mind an individual neuron in the human brain (of which there are roughly 100 billion) may have upwards of 10,000 synapses, connecting to perhaps hundreds or thousands of other individual neurons.
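To illustrate the gap, here is a crude sketch contrasting the two. The XOR gate is the real thing; the “synapse” is an invented toy whose constants and kinetics are nothing like biology, the point being only that its output is a graded, many-bit value rather than a single bit:

import math

def xor_gate(a: int, b: int) -> int:
    # Output: exactly one bit.
    return a ^ b

def toy_synapse(amount: float, burst_hz: float) -> float:
    # Invented toy, not a biophysical model: transmitter "concentration"
    # rises with amount and burst frequency, and a presynaptic
    # autoreceptor term turns the volume down as concentration rises.
    concentration = amount * burst_hz
    autoreceptor_damping = 1.0 / (1.0 + concentration)
    # Output: a graded value, dozens of bits as a float, not one bit.
    return math.tanh(concentration * autoreceptor_damping)

print(xor_gate(1, 0))          # 1
print(toy_synapse(0.8, 40.0))  # some value in (0, 1)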

Now I’m just throwing out an idea for a crude model of a fly’s brain. Generally, these neural networks are modeled in software. Maybe there’s physical hardware that can also perform these connections, but typically the A.I. is modeled on a distributed network of computers, a cluster of some size. We know the brain of a fly has about 100,000 neurons. That’s a lot smaller than the human brain, but how does it compare with our attempts at AI? Again, totally underestimating what must be the real connectivity of fruit fly neurons, let’s just say each neuron can connect to, at most, 100 other neurons. And let’s divide the fly’s brain up into something like a typical artificial neural network, with just three layers, the input, hidden, and output, ignoring completely any error-correction algorithms or weighting of inputs and outputs which are required to produce sensible activity. With 100,000 neurons to work with, and the simplest possible arrangement of three layers with an equal number of neurons in each layer, with a minimum of 100 connections each (say 92 to the next layer, and 8 to nearest neighbors in a square array within the layer, ignoring the boundary), that’s over 3.3 million synapses per layer, as the sketch below works out. That’s just possible paths for information to flow, not taking into account at all how many bits get transmitted via each path, how those bits are processed, or how that activity is organized. Has anyone tried to model a brain at this level of complexity? Could a modern supercomputer, composed as it is of hundreds or even thousands of individual processors, handle that kind of load? Could it simulate a real fruit fly?
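Back-of-envelope, under the same simplifying assumptions (three equal layers, 100 connections per neuron; the layer split is mine, and every figure is a round number):

neurons = 100_000                        # fly brain, per the estimate above
layers = 3
per_layer = neurons // layers            # ~33,333 neurons per layer
connections_per_neuron = 100             # say 92 forward + 8 lateral
synapses_per_layer = per_layer * connections_per_neuron
print(f"{synapses_per_layer:,} synapses per layer")  # 3,333,300 - over 3.3 million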

It seems that even with just the several million paths information could flow along, the number of possible operations, and the processing power needed to coordinate all those operations, could easily explode. Just adding a byte to each synapse for output to the next layer (to make something even remotely like a realistic synapse) inflates the bit count eightfold, ignoring the program that is needed to process the input to produce the output. What kind of logic gate processes 100 one-byte inputs to yield a sensible output, and to how many neurons downstream? I’ve no idea. I’m guessing that’s orders of magnitude more complexity. I can’t even imagine what the numbers must look like for the human brain. Just taking the 100 billion neurons and adding a modest 1,000 synapses per neuron gives 100 trillion possible paths of information. In terms of combinatorics, what kinds of values are we dealing with? What kind of program would have to be written to simulate it? What kind of computer would be needed to run it?
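Scaling the same toy arithmetic up (again, every figure is just the round number used above, nothing more):

fly_synapses = 3_333_300 * 3             # three layers of the toy fly model
human_neurons = 100_000_000_000          # 10^11
human_synapses = human_neurons * 1_000   # a modest 1,000 synapses each
print(f"fly:   {fly_synapses:,} synapses, ~{fly_synapses / 1e6:.0f} MB at 1 byte each")
print(f"human: {human_synapses:,} paths, ~{human_synapses / 1e12:.0f} TB at 1 byte each")

And that’s pure storage; it says nothing about the program that has to read, combine, and route all those bytes, which is where the combinatorics really bite.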

I wouldn’t know that, except that you described it to me as such. I agree it’s simple (conceptually) to come up with a system that will say it’s self-aware if it’s not. So, do you believe in consciousness? Because it sounds like you don’t trust your brain telling you you’re conscious any more than the theoretical room full of people who may/may not understand Chinese.

So do you not believe in free will?

I guess I can accept those - you cannot have a soul if you don’t have consciousness or free will. I also can see how a very generalized system with pseudo-centralization might stumble across the question of self-awareness whereas a real-life computer won’t.

But the experience of consciousness is persistent. It leads me to think there’s no ‘me’ - just random squirts of energy, each of which comes across patterns previously stored in the matter of the brain and behaves as though it were there all along.

Then death is nothing, because consciousness is nothing. You die every time a synapse is done firing.