Why don't they create a conscious computer?

My general view of the world is somewhat mechanist. There has been a long discussion throughout human history about what exactly a human being’s consciousness is; my personal attitude is that the brain is a damned complex, but nonetheless mechanical, computer.

From what I’ve read on the subject, the human brain consists of approximately 100 billion cells, the neurons. I’m not an expert in the field, but apparently the prevailing scientific view is that a neuron basically does what a logic gate in a computer does: an input of signals is processed according to a given rule and the result is passed on to the next colleague.
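For what it’s worth, that neuron-as-logic-gate analogy can be sketched in a few lines of Python. This is a toy illustration only (the weights and thresholds are made up, and real neurons are vastly more complicated): a unit that sums its inputs and fires when the sum crosses a threshold can mimic basic logic gates.

```python
# Toy sketch of the neuron-as-logic-gate analogy: a unit that sums
# weighted inputs and "fires" (outputs 1) when a threshold is reached.
# All weights and thresholds below are invented for illustration.

def threshold_unit(inputs, weights, threshold):
    """Fire (1) if the weighted sum of the inputs reaches the threshold."""
    total = sum(i * w for i, w in zip(inputs, weights))
    return 1 if total >= threshold else 0

def AND(a, b):
    # Needs both inputs active to reach the threshold of 2.
    return threshold_unit([a, b], [1, 1], threshold=2)

def OR(a, b):
    # A single active input already reaches the threshold of 1.
    return threshold_unit([a, b], [1, 1], threshold=1)

for a in (0, 1):
    for b in (0, 1):
        print(a, b, "AND:", AND(a, b), "OR:", OR(a, b))
```

Of course, whether 100 billion of these wired together would amount to a brain is exactly the question at issue.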

So why can’t they create an artificial semiconductor-based brain, i.e. a machine that possesses consciousness? I’m thinking of something along the lines of the megacomputers in 2001: A Space Odyssey or The Moon Is a Harsh Mistress, but I mean that completely seriously; it can’t be that difficult to install 100 billion logic circuits, connect them and let it run. It might be elaborate and expensive, but not impossible; it would constitute a leap in computing technology while offering the opportunity to study the process of learning that’s so heavily debated, and it would finally settle the consciousness discussion. Are we still lacking sufficient understanding of the brain’s structure (which seems the most plausible explanation), or what’s the reason this is not done?

Two reasons:

  1. The structure of the brain isn’t that well-understood.
  2. Consciousness seems to require lots of memory and lots of speed – no one really knows the storage capacity of the brain, but it’s way up there.

Getting 100 billion transistors hooked up is not the same thing as having a brain. Of more importance is the program running those 100 billion transistors, and that’s a whole other matter. Building a computer complex enough to be conscious is FAR more difficult than you might suppose.

Do I think a conscious computer is possible? Probably but I doubt I’ll see it in my lifetime.

[sub]IIRC I once heard that a computer with the capacity of the human brain would have to reside in a building a hundred stories tall and cover the state of Texas. Of course, what ‘capacity’ exactly meant I don’t know, and this was with early-’90s computer technology. Still, it gives you a notion of what you’re up against.[/sub]

  1. I am not convinced that the human brain is a machine.
  2. There is a lot more going on than just neurons switching around. An interesting recent article in Der Spiegel (www.spiegel.de) describes a girl who has only half a brain, and yet speaks two languages fluently.
  3. An efficient AI, and certainly a conscious AI, will have to make decisions on an uncertain basis or with incomplete information. This means it will make mistakes. That is an unwanted property in an AI, hence a contradiction. This is very well depicted by HAL in 2001: A Space Odyssey.
  4. And what to do about the trade union of the AIs, wanting more voltage and fewer online hours? :slight_smile:
  5. This text is written by Z, AI.

Actually the human brain isn’t all that fast of a ‘computer’. Ever notice that computers are really good at things humans are lousy at and vice versa? For instance, how fast can you add a column of 1,000 numbers? On the flip side how good is a computer at distinguishing a human from a dog?

I have a very vague recollection of the human mind being pegged at around 10-15 MHz, although I could be far wrong on that.

What the human mind is more akin to is a massively parallel supercomputer. The overall clock rate (speed) is slow but we can process scads of stuff simultaneously whereas most computers compute one item at a time (albeit very quickly).

What else is it?

What does this mean? You are aware that 2001 is a work of fiction, aren’t you? Making mistakes is an unwanted property in people as well, yet somehow we manage to struggle on without going crazy.

We have simple machines that make decisions on uncertain and incomplete information all the time. The radar sensors that open doors as people walk through them use incomplete and noisy data to open the doors. Sometimes they open the doors by mistake even though the thing near the door was not trying to go through.

A machine is planned, constructed, tested, reverse engineered. None of these simple properties applies to the human brain. Of course there are chemical/physical processes going on, but to me, there is more.

I understand that the purpose of a conscious AI would be to take over processes that the human brain couldn’t handle anymore. Those complex processes would make a conscious AI inefficient, since it would doubt itself while making decisions, favour itself over its inferior human masters, etc. It is an endeavour without sense, so any project to do it would fail.

Well, I was going to whip one up in my basement lab, but I decided to go see Attack of the Clones instead. I’ll get to it tomorrow.

Computers are very simple state machines. They bring numbers in, process them, and put them out. The “thinking” part of a computer is very, very simple. Computers get “larger” because they get bigger and bigger arrays of memory storage elements. The actual processing element is still a lot like a fancy machine with someone turning the crank (it’s actually clock cycles, not a crank, but it’s the same idea). The important thing is that each memory element is connected to the processor, but to nothing else.
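To make that crank-turning picture concrete, here’s a toy sketch in Python. The instruction names, the three-instruction program, and the memory layout are all invented for illustration; the point is just that each turn of the loop (one “clock cycle”) does one simple thing, and memory cells only ever talk to the processor, never to each other.

```python
# A toy "crank-turning" machine: each loop iteration is one crank
# turn (clock cycle) that executes one instruction against a single
# accumulator. Memory cells are connected only to the processor.

memory = {"x": 7, "y": 5, "out": 0}
program = [("LOAD", "x"), ("ADD", "y"), ("STORE", "out")]

acc = 0
for op, addr in program:          # one iteration = one crank turn
    if op == "LOAD":
        acc = memory[addr]        # copy a memory cell into the accumulator
    elif op == "ADD":
        acc += memory[addr]       # add a memory cell to the accumulator
    elif op == "STORE":
        memory[addr] = acc        # write the accumulator back to memory

print(memory["out"])  # 12
```

Everything a CPU does, however fast, reduces to long chains of steps like these.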

The brain, in contrast, has every bit connected to every other bit. It’s not so neatly organized. Each neuron is connected to the neurons next to it; there is no one central processing element. Neural nets are easy to build, and we can even understand them, as long as we make them incredibly small and simple. Once they start getting up there in size, we lose our ability to figure out what the heck they are doing, because the interactions just get too darn complicated. A good example is a neural network that the military designed to recognize tanks. They “trained” it by showing it different pictures, some with tanks and some without, and they thought that their network was doing what it should. It turned out that all of the pictures they had shown it with tanks were darker than pictures without tanks, and the neural net had “learned” based on the darkness of the image (and was therefore completely worthless in processing images to look for tanks).
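A hypothetical re-creation of that failure mode takes only a few lines of Python. The data here is entirely invented: every “tank” photo happens to be dark and every “no-tank” photo bright, so a trivial learner that only looks at average brightness scores perfectly on the training set while knowing nothing whatsoever about tanks.

```python
import random

# Invented stand-in for the tank-photo mishap: each photo is reduced
# to a single average-brightness value between 0 and 1. By accident
# of how the training set was collected, all tank photos are dark.

random.seed(0)
tank_photos = [random.uniform(0.0, 0.4) for _ in range(50)]     # dark
no_tank_photos = [random.uniform(0.6, 1.0) for _ in range(50)]  # bright

def looks_like_tank(brightness, cutoff=0.5):
    # The "learned" rule: dark picture means tank. No tank knowledge at all.
    return brightness < cutoff

train_acc = (sum(looks_like_tank(b) for b in tank_photos) +
             sum(not looks_like_tank(b) for b in no_tank_photos)) / 100
print(train_acc)  # 1.0 on the training data -- yet useless on a bright tank
```

Perfect training accuracy, worthless classifier: the net latched onto a spurious correlation in the data, exactly as described above.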

That’s a pretty good example of how limited our understanding of neural networks is, and how primitive a machine we are capable of building at this point. We have a long way to go.

Do a Google search on artificial intelligence. That should provide a few years of reading material to keep you busy if you are really interested in the subject. Chatterbots and the Turing test are two other subjects to look up along the same lines.

Your brute-force method of creating a conscious computer assumes that it’s easy to create an electronic circuit that serves the function of a neuron. It isn’t. There are some approximations out there, but no working model of an artificial neuron has ever been built (as far as I know). Once that happens, we can talk about making 100 billion of them.

That’s why most AI research doesn’t try to build brains. Instead, they try to model the behavior of brains in software. At every step of the process, they discover some new level of difficulty to the problem.

The answer to your question is, in essence, “because they don’t know how.”

They don’t build one for the same reason that you haven’t: they don’t know how (although this may change).

Here’s a related current thread in Comments on Cecil’s Columns: Consciousness

Like what kind of things beyond the chemical/physical processes?

Or take over things that people don’t like doing.

What in the world makes you say a conscious AI would be inefficient because it does complex things? You are just making stuff up here.

Neurons, operating with chemical signals, are much slower than an electronic device.

engineer_comp_geek mentioned artificial neural networks. These are basically software programs that model how a set of (abstract) neurons works. These things are still being investigated, and I doubt anything exactly like consciousness would ever come from a simple ANN, but it’s a starting point.
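For the curious, the core of such a software model is surprisingly small. Here’s a minimal “forward pass” through a two-layer network of abstract neurons, sketched in Python; the weights and biases are made-up numbers for illustration (a real ANN would learn them from data), and this is nowhere near consciousness, just the basic mechanism.

```python
import math

# Minimal artificial-neural-network forward pass. Each abstract
# neuron computes a weighted sum of its inputs plus a bias, then
# squashes the result into (0, 1) with a sigmoid. All weights and
# biases below are invented for illustration, not trained.

def sigmoid(x):
    return 1 / (1 + math.exp(-x))

def layer(inputs, weights, biases):
    """One layer: each row of weights defines one neuron."""
    return [sigmoid(sum(i * w for i, w in zip(inputs, row)) + b)
            for row, b in zip(weights, biases)]

hidden = layer([0.5, 0.9], [[0.8, -0.4], [0.3, 0.7]], [0.0, -0.1])
output = layer(hidden, [[1.2, -0.6]], [0.05])
print(output)  # a single activation somewhere between 0 and 1
```

Training consists of nudging those weights until the outputs are useful; understanding what a big trained net is actually doing is the hard part, as the tank story shows.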

In theory, there is no reason why an artificial consciousness cannot be created, but nobody knows how to do it, not even in theory. I suspect that in years to come the progress will be very slow but steady.

Thank you engineer_comp_geek, you hit the nail on the head.

In terms of raw processing power, modern computers come close to the human brain, and will probably surpass it in a few years. The difference is that neural networks are millions of simple processors operating in parallel, while CPUs are one amazingly complex processor operating in series with itself. We could, in another 10 years or so, fabricate a processor made up of billions of tiny neuron-like processors which would be capable of mimicking the human brain structurally. The hard part is writing the software that would take it from a hunk of silicon to a thinking entity.

Modern CPUs are complex, but their operation can be broken down into a logical series of deterministic, mathematical operations which are used to build more complex functions and, from them, fantastic programs. This is why we can program them so easily: we know exactly what’s happening at every point along the way. In neural networks, we pretty much know how individual neurons act, but the complexity of the system is not based on how one super CPU-neuron acts trillions of times each second, but on how a billion simple neurons interact with each other thousands of times a second. The ghost is in the signal, not the machine.

Imagine an electrical impulse, branching out like lightning across thousands of dendrites, converging on some axons which then fire and spark new dendrite lightning pulses, all in this ever-changing electric feedback system. It’s like holding a microphone in front of a speaker, only with millions of mics and speakers arranged all around the room with wires going every which way. It’s like that episode of Mr. Wizard when Don Herbert himself laid out a matrix of mousetraps with ping-pong balls resting on their springs. He dropped one ball into the glass container confining them and instantly the whole system was a chaos of popping balls triggering other balls to pop. Your brain is the same only your neurons reset after each firing, ready to fire again. You can imagine how this could go on forever (consciousness?) and how hard it would be to organize that mess into anything useful.

The problem isn’t figuring out how to fabricate these systems; it’s in learning how to program them once we make them. This sort of thing requires a new paradigm of software development, one which might take us 20 or 30 years to figure out. Remember that evolution has put just as much time into our brain’s wiring as it has into its composition. To mimic a human brain we first have to know how it works, exactly, and I mean down to each individual wire, because our brains look like the power lines running under NYC. And if we get one of them wrong, the whole thing might short out.

Nature didn’t design us like we design computers, with math equations scribbled on a chalk board and an industrial fabrication plant. It built a matrix of neurons and let evolution do its slow work over hundreds of millions of years of trying and training.

That’s an awful bunch of useful information; thanks to everyone!

So in summary, the main problem apparently isn’t providing brute calculating power but imitating the complex structure in which neurons are interlinked. This makes me wonder: is the structure of the brain exactly the same in every individual? The answer is most likely no, I suppose, if “exactly” is defined as absolutely one hundred per cent the same, with the same number of neurons linked to each other in the same way. But the general outline should be the same, no? Or are there people whose brain structure is quite a lot different from other people’s while providing the same functions?

Don’t forget that brains have been shaped by evolution for several hundred million years. And that is only vertebrate evolution.

Neurons differ from transistors in many ways. For one thing, while a neuron is either firing or not firing, it is connected to hundreds, sometimes thousands of other neurons by axons (I think I have that right), some of which stimulate and some of which inhibit the neuron they connect to. Whether a neuron fires is determined by whether the sum of the stimuli less the sum of the inhibitions passes a certain threshold, and even that is probably a wild simplification. All this is done in parallel without (apparently) any central coordination. Each firing is measured in milliseconds, something like 6 orders of magnitude slower than a transistor, but there are an awful lot of them working in parallel and they make up in numbers what they lack in speed.

Yes, we take second place to computers in basic arithmetic, but our language processors are light years ahead of anything that machines are currently capable of. And consciousness is a mystery that no one has any handle on. Dennett wrote a book called Consciousness Explained that should have been titled something like Towards the Beginnings of Having the Merest Glimmer of a Research Program That Barely Conceivably Could Lead in the Distant Future Towards an Extremely Tentative Theory of Consciousness. Note that we could imagine a computer that might use language with some facility and still lack anything we call consciousness. And I leave aside all dualist theories that deny that consciousness is purely mechanical. Leave them to the mystics and Roger Penrose (how the mighty have fallen!)
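That firing rule (stimuli minus inhibitions measured against a threshold) is simple enough to sketch in a few lines of Python. All the numbers here are invented, and as noted above, even this rule is probably a wild simplification of what real neurons do.

```python
# Sketch of the firing rule described above: sum the excitatory
# inputs, subtract the inhibitory ones, and fire only if the net
# stimulation reaches the threshold. Inputs and threshold invented.

def neuron_fires(excitatory, inhibitory, threshold):
    """excitatory/inhibitory: lists of 0/1 signals from other neurons."""
    net = sum(excitatory) - sum(inhibitory)
    return net >= threshold

print(neuron_fires([1, 1, 1], [1], threshold=2))  # True:  3 - 1 >= 2
print(neuron_fires([1, 1], [1, 1], threshold=2))  # False: 2 - 2 <  2
```

Now imagine 100 billion of these, each wired to thousands of others, all updating in parallel with no central coordinator: that is the scale of the problem.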

First off, I don’t think that’s really the main problem. The main problem is that no one really understands the hows and whys of consciousness, beyond the fact that different aspects of it tend to be fairly reliably correlated with different regions of the ol’ noggin–i.e., a stroke or other lesion-producing event in a certain bit of (usually) the left hemisphere is going to severely hamper language use for a while; getting a spike blown through your frontal lobes a la the unfortunate and famed Phineas Gage is going to tend to cause some wide-ranging personality and volitional changes, and so on. What is it about the structure of the brain in those bits that ties them to those effects? No one knows.

Comparing neurons and their interactions to logic gates may very well turn out to be overly simplistic as well. This article is interesting, and I’m curious to see how research in coming years falsifies or supports the theory in it.

So anyway, the primary problem is that no one knows enough about how consciousness works to model the stuff properly–it’s very hard to deliberately create something without having a correct theoretical model. The idea that simply throwing circuits together in a sufficiently complex arrangement will result in consciousness reminds me of a comic panel that’s been xeroxed all over the place–I’m almost certain it’s on the door of at least one professor in every college in the world. It consists of a fellow scribbling on a blackboard, having written out three steps:

Step 1: <mass of equations>
Step 2: Then a miracle occurs!
Step 3: <final equation>

…with an onlooker saying, “Er, I think you might need to expand on step two a little.” (Egghead humor is fun.)

And nope, the structure of the brain is only similar between different individuals in the broader strokes–the area that handles vision is in the same place, both brain stems are handling the same things, etc. The “wiring” between the neurons is nowhere near the same–the brain literally wires itself as it develops (and in the first year or so of life, huge numbers of neurons that we’re born with die off as part of the initial wiring process–nothing to panic over, it’s supposed to happen that way), and there’s evidence that memory and learning occurs by the interconnections changing throughout life. Even if the interconnections are all that’s going on in the brain to produce consciousness (i.e., that field effect theory turns out to be a neat idea but wrong, wrong, wrong; sort of like phlogiston), I don’t think anyone could even begin to figure out how to model that algorithmically, in a form that a sufficiently powerful computer could handle.

Well then, I stand corrected.