Constructed Intelligence

I briefly considered posting this in IMHO, but I think it is inherently debatable.

The question is: Do you think that entities can, at least in principle, be engineered in such a way that they exhibit general intelligence in their actions?

(Obviously, one can ‘construct’ an intelligent agent by taking Mr. Sperm and Ms. Egg and introducing them together in a test tube, finding a suitable carrier, waiting 9 months, then teaching the resulting child for a decade or so. What I am asking is whether there is any other way for intelligent agents to be created.)

I personally see no reason why it would be impossible, in principle, to construct such an agent. It may take many, many more years of research and development, and it’s possible that such a thing will never in fact be created, but I know of no principle that would disallow it.

My reasons for believing this are:

  1. I believe the Physical Symbol System Hypothesis to be true. The PSSH simply says that a physical symbol system* is necessary and sufficient for general intelligent action.

  2. I don’t believe there is some soul or mystic force that automagically provides intelligence to things that contain it. However, even if there is such a soul or force, it’s possible that constructed intelligence is still, in principle, possible. Assuming there is such a soul, there might still be a way to capture it using physical means, or perhaps God would place a soul into the constructed agent at the appropriate time.

  3. The organization of the human brain doesn’t appear to have any qualities that would make it non-replicable in other materials. Therefore, it is likely possible to create at least one type of intelligent agent: one which has a structure analogous to that of a human brain.

What does everyone else think?


*Physical Symbol System
A physical symbol system is simply a system that contains physical patterns of some sort (called symbols), that are related to each other in some physical way to form ‘expressions’, and that also contains some method for altering, creating, destroying, or copying these expressions.

For example, computers are physical symbol systems because they contain symbols (physical quantities of voltage) that are related to each other in a physical way (by, for example, which comes after the other) to form expressions (bytes, etc.). Computers also have methods for altering, creating, destroying, and copying these expressions.

Human brains are also physical symbol systems. (This is my personal view; I’m not sure how controversial it is.) The chemical signals emitted by neurons are the symbols, and the way they are organized determines the ‘expression’ they make. The lowest-level expressions are probably nothing like what we consider thoughts and ideas. Thoughts and ideas are probably constructed of very complex expressions made up of the simpler expressions.
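To make this a bit more concrete, here is a rough, purely illustrative sketch in Python (my own toy example, not anything canonical from the PSSH literature) of a system whose symbols are just tokens, whose ‘expressions’ are ordered groupings of those tokens, and which has processes for creating, copying, altering, and destroying expressions:

[code]
# Toy "physical symbol system": symbols are tokens, expressions are ordered
# groupings of symbols, and the system has processes that create, copy,
# alter, and destroy expressions. Purely illustrative; not a model of a
# brain or of real computer hardware.

class SymbolSystem:
    def __init__(self):
        self.expressions = {}   # name -> list of symbols
        self.counter = 0

    def create(self, symbols):
        """Create a new expression from a sequence of symbols."""
        name = f"expr{self.counter}"
        self.counter += 1
        self.expressions[name] = list(symbols)
        return name

    def copy(self, name):
        """Copy an existing expression."""
        return self.create(self.expressions[name])

    def alter(self, name, position, symbol):
        """Alter one symbol within an expression."""
        self.expressions[name][position] = symbol

    def destroy(self, name):
        """Destroy an expression."""
        del self.expressions[name]

system = SymbolSystem()
a = system.create(["ON", "block1", "block2"])  # an expression relating symbols
b = system.copy(a)
system.alter(b, 2, "table")
print(system.expressions)
# {'expr0': ['ON', 'block1', 'block2'], 'expr1': ['ON', 'block1', 'table']}
[/code]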

BTW: I use the term ‘Constructed Intelligence’ in the title of this thread instead of the more usual ‘Artificial Intelligence’ because I believe it more accurately describes the intelligences that may arise in the future due to research in computer science. ‘Artificial’ implies that it is somehow not genuine, and I don’t think this will be the case.

Surely by that definition, my pocket calculator is an intelligent system, as is an abacus. No problems there, then; I’m quite happy to attribute intelligence to any input/output device that models and affects the physical world. However, I am not willing to attribute consciousness, or more specifically self-consciousness, to such devices (including computers, even parallel processors).

Perhaps the question you really need to ask is:

‘Constructed Consciousness?’

“The question of whether computers can think is precisely as interesting as the question of whether submarines can swim.” – Who said this (I mean the original quote, not the mangled, half-remembered version above)?

Time to sleep.

(300, BTW).

Perhaps, but neither of these exhibits general intelligent action, which was what my question was about. At best they display specific intelligent action, for their intelligence is restricted to the realm of arithmetic.

On the continuum of Abacus, Babbage Difference Engine, Shockley Transistor Computer, Desk Computers, Calculators, Pentium Computers, Supercomputers, and Parallel Architecture Computers, where do you believe ‘general intelligent action’ begins? What are your criteria for separating the non-intelligent items on this list from the intelligent items? What is your decision procedure? Please tell!

A.I., as it was formerly known back in the days of undiluted optimism, promised to make a machine that could think independently of human problem solving, such as being able to generate new math theorems. The problem with this is that it contained, then and now, many assumptions about humans, including an epistemological assumption, a biological assumption, an ontological assumption, and a psychological assumption (and a sociological assumption that was never articulated). The problem has since been redefined many times, and AI was expanded to include ideas like fuzzy logic, which may have qualified software as AI. But software cannot ever be AI, in the sense that AI cannot be separated from hardware without assuming something about understanding and metaphysics as well. In fact, the entire misunderstanding was probably due to switching emphasis from hardware to software at will, defying terms. I think AI will have to wait for a new set of hardware parameters, perhaps both biological and laser circuits that compete with each other and can create a metacircuit within the same system to overcome incongruities and mimic self-awareness through doubt.

Bottom line: it is so difficult to find and develop a human who can think independently, let alone create math theorems, that we should be in awe if a “machine” could be made by a committee of humans to achieve this. But I believe the results will rise no higher than the convenient assumptions made about humans. I am an optimist, of course, but I see problems in ethics that go beyond our understanding of ethics, such as programming your friends or spouse. Oh, well, I guess it beats having no spouse or friends.

Why on earth would the specific hardware matter? And why do you see this as a continuum, unless you are using chronology as the basis of the continuum?

Intelligence != General Intelligence
Your first statement regards general intelligent action, while your second simply regards intelligence.

Perhaps an example will help clarify the difference:
Deep Blue, the IBM computer that defeated world chess champion Garry Kasparov, is very intelligent at playing chess. However, it is about as stupid as a thing can get at writing a novel. Something that has general intelligence would be able to apply its intelligence to myriad situations and problems. Something that is intelligent at one thing may be incredibly non-intelligent at another. If something does one thing very intelligently, but is unable to display any intelligence in any other activity, then it does not have general intelligence.

I repeat my earlier question: “Do you think that entities can, at least in principle, be engineered in such a way that they exhibit general intelligence in their actions?” If not, why not? If so, why?

Consciousness is a philosophical question that cannot ever be completely answered. After all, you cannot know with any certainty that I am conscious, just as I cannot know with any certainty that you are conscious. Asking, “Do you think machines will ever obtain consciousness?” or something like that would be an interesting IMHO question, but it is inherently unresolvable. If I were to post such a question in Great Debates, there would not be much more to the responses than, “I think this” or “I think that”, with very little to back them up, due to the nature of the problem.

Eventually. There is a researcher (his name escapes me at the moment) who decided that AI was being done bass-ackwards. While other researchers were attempting to create an AI comparable to a human brain, he decided to create AI insects. You may recall a grossly distorted version of him and his research in the cockroach episode of The X-Files.

As other posters have said, no one can really agree on a definition of natural intelligence. Usually you have to understand how a thing works before you can copy it.

Lastly, I haven’t discounted the possibility that a conscious machine may happen by accident. The research that produced Rogaine and Viagra was aimed at medications for other conditions. I don’t fear a machine trying to destroy humanity à la Skynet of the Terminator, but considering the internet, there’s a good chance that a self-aware machine would be obsessed with porn and cult TV.

Could you please elaborate on these assumptions and their importance to AI?

I have no idea what you mean here. Are you simply saying that only physical things can be intelligent?

What is a “metacircuit”? And why would self-awareness be needed for intelligence?

Perhaps I am misunderstanding you, but I think you’re assuming that the goal of AI research is to create an agent that mimics human thought. That is only one possible goal of AI research. Another goal would be to create an agent that thinks or acts rationally.

DocCathode: There are probably numerous researchers who use insects as models for their programs, but I think you’re referring to Rodney A. Brooks. His technique is the “bottom up” approach to AI - let the machines learn how to get around, what certain inputs to their sensors might mean danger, etc. The “top down” approach is to pack a program full of data and rules for using that data, and hope you picked the right rules and data. I found an article discussing these two approaches here.
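Very roughly, and this is only my own toy illustration (not Brooks’s actual subsumption architecture or any real system), the contrast between the two approaches looks something like this in Python:

[code]
# Toy contrast between "top down" and "bottom up"; both agents are
# drastically simplified illustrations, not anyone's real system.

# "Top down": a store of hand-entered facts plus rules for using them.
FACTS = {"fire": "dangerous", "food": "desirable"}

def top_down_agent(percept):
    # Consult the knowledge base and hope the right facts/rules were included.
    label = FACTS.get(percept, "unknown")
    if label == "dangerous":
        return "flee"
    if label == "desirable":
        return "approach"
    return "do nothing"

# "Bottom up": simple sensor-driven behaviors layered so that the more
# urgent ones override (subsume) the less urgent ones.
def bottom_up_agent(sensors):
    if sensors["heat"] > 0.8:          # reflex layer: avoid harm
        return "flee"
    if sensors["smell_food"] > 0.5:    # next layer: seek food
        return "approach"
    return "wander"                    # default layer: explore

print(top_down_agent("fire"))                              # flee
print(bottom_up_agent({"heat": 0.1, "smell_food": 0.9}))   # approach
[/code]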

Is the OP questioning whether or not we’ll ever model the brain in other materials (i.e., a software simulation), or whether we’ll ever construct an intelligence using different mechanisms than the human brain does?

Originally posted by BlackKnight:
“Why on earth would the specific hardware matter? And why do you see this as a continuum, unless you are using chronology as the basis of the continuum?”

The reason I gave this continuum (which is based on complexity of symbol-handling ability) is that I would maintain that all of them show signs of intelligence on a continuum from low to high, with, probably, most vertebrates falling above the highest computer for real intelligence: the ability to interpret and react to an environment successfully. I was using this example to maintain that there is a continuum of intelligence, but no good cut-off point for intelligent/not-intelligent.

Originally posted by BlackKnight:
“I repeat my earlier question: ‘Do you think that entities can, at least in principle, be engineered in such a way that they exhibit general intelligence in their actions?’ If not, why not? If so, why?”

Answered above.

Originally posted by BlackKnight:
“Consciousness is a philosophical question that cannot ever be completely answered. After all, you cannot know with any certainty that I am conscious, just as I cannot know with any certainty that you are conscious. Asking, ‘Do you think machines will ever obtain consciousness?’ or something like that would be an interesting IMHO question, but it is inherently unresolvable. If I were to post such a question in Great Debates, there would not be much more to the responses than, ‘I think this’ or ‘I think that’, with very little to back them up, due to the nature of the problem.”

If you look at my first post above you will see that this is the question I was asking of the OP: was it really intelligent action or conscious action that was being addressed?

There is absolutely no reason why a constructed intelligence could not be built. It would be a complicated set of atoms obeying the laws of chemistry and physics. Human brains are complicated sets of atoms obeying the laws of chemistry and physics. That’s it. Get the right components together and organize them in the right way and you’ve got yourself a being. Congratulations, God.

Now, whether any of this is actually feasible in the foreseeable future is a different debate entirely.

-b

BK,

To begin with, and to make a point about assumptions: to say that a computer-software package can beat a grandmaster chess champion because it is more “intelligent” makes the assumption that playing chess well is a function of human intelligence (I’m not saying you made this assumption). But we could easily assume otherwise: that for a computer to beat a chess champion says more about chess than about humans, and that humans who play chess well are behaving more like machines than like humans. I wager that I could get rich by playing a computer at poker (with a human dealer), and end up owning the machine itself.

Here is a brief discussion of assumptions, making no other assumptions about definitions of AI except that it proposes to create an artificial intelligence to freely interact with human intelligence and respond unpredictably in valid ways.

Biological assumption: that the brain is wired similar to an analog or digital computer, with or without assuming the role of different neurons, axon-filtering of impulses, or neural networking (this is not valid if the computer is based on DNA or something biological).

Psychological assumption: this assumes that meaning, communication, information, and info-processing are remotely similar between computers and human brains, which are attached to bodies. Never mind that physio-chemical responses are never utilized by digital computers (if they were, this would allow a metacircuit featuring two systems in the same decision process working on the same goal, one of which could perhaps override a defective if-then loop, perhaps out of boredom or lack of perceived prospects, or to prevent self-destruction, which is a function of self-awareness). The key to the psychological assumption is the integrated way in which the physical senses are processed (usually not in slavish ways, but in ways that allow the organism to thrive, escape, exploit). This is the paradox of AI, which would never seek to make a machine that could destroy the programmers or free itself from duty, and which is then never defined as intelligent.

Epistemological assumption: That even though human behavior is unexplainable, it is nonetheless formalizable. This is an odd assumption to make, and no matter how one defines “formalizable”, it still begs the question as to what human intelligence is. “Laws of behavior” would be an ambiguous term.

Ontological assumption: The mother of all assumptions, which contains many linguistic assumptions as well. First of all, if we struggle to boil down info-processing to a few simple logical rules, and then expect this to be the basis of intelligence, therein lies the logical mistake. The world does not offer a complete set of clear-cut meanings and definitions, and these would be arbitrary interpretations if we thought it did. Therefore, what can we expect from a computer that has only been programmed to see the world in clear-cut terms? It would be akin to isolating a species in a lab to study their natural behavior. Humans can see things in a context; computers are the context, which they would not see unless they were self-aware. A computer would have to know it was a computer without being told to be self-aware. It would have to define itself in relation to its world and be able to express this physically or linguistically. That seems to be a qualification for intelligence.

The linguistic assumption is actually quite complicated, and I would bet insurmountable with digital computers. In short, figures (words, letters, names) don’t “have” meanings in and of themselves; rather, all meaning is figurative (big difference). That is phenomenological hermeneutics in one sentence.

Sociological assumption: I already mentioned this one, about a computer not having intelligence without dangerous freedom of thought and self-control, a social contract.

See Hubert L. Dreyfus, What Computers Can’t Do (first published 1972), for detailed explanations. I recall some of his arguments here from memory, but they are basic to many arguments about formal systems. Dreyfus is a philosopher and became famous for challenging the irrational glee that surrounded computer lab funding in his day, perhaps knowing full well as a philosopher that formal systems have limits. One thing I remember he may have pointed out is that “cyber” originally meant control.

As you can see, this whole assumption of AI boils down to acts of interpretation, translation, and transmutation, which are near-impossible to conceive of formally in a microcosm. If we can get a computer to laugh at a joke without knowing it is a joke, then that is all that is required. However, I think it is more possible to re-create a special human with a higher-than-human-level lab-designed, biological processor for a brain.

Unnecessary. In fact, including this assumption in the problem indicates that another assumption has been made: that the biological brain is the only possible model for intelligence.

Unnecessary. Even if we extend the problem to include the necessity for a machined intelligence to communicate directly with humans, we need not have similarity of low level information modelling. We require only a mechanism for translation. [sub]“Only” as in the sole requirement, not as in implying a simplicity of construction.[/sub]

Imprecise. The epistemological assumption which must prove true is that a generalized intelligence can be formalized. Human behavior is irrelevant.

This applies only to a heuristic model. Whether this has been demonstrated to be fallacious is an open question (though I tend to agree).

Consciousness and self-awareness are not identical. I am unclear which you are stating is a necessary condition for intelligence.

Oops, more later – have to run

BlackKnight asks what everyone else thinks about his (her?) OP. And yet, in his response to Pjen above, he maintains that consciousness is a philosophical question that cannot ever be completely answered. He further maintains that without certainty about consciousness there is not much to say except “I think this” or “I think that”, with very little to back it up.

I see a dichotomy in these expressions of BlackKnight’s. On one hand he is asking people to tell him what they think. On the other hand, he implies that without certainty about consciousness all people can do is merely say “I think this” or “I think that”.

Well my friend, here are a few things I think, with some references to back them up.
1- “Certainty” is that state of ignorance which has yet to recognize itself. The back up for this statement can be found in the scientific (not philosophical?) ENCYCLOPEDIA OF IGNORANCE, edited by R. Duncan and M. Weston-Smith, 1977 Pergamon Press Ltd, Oxford UK pp13-14. An excerpt can be found at http://www.amasci.com/freenrg/bead.txt

2- The latest findings (2 weeks ago) from the Human Genome Project at UC Santa Cruz indicate that most of the genes found in human DNA are equivalent to those of a fruit fly. In simple words, this could imply that at this stage of human evolution (the human brain?) we are merely snails compared to, say, 2 million years from now.

3- I totally agree with the OP that entities can, at least in principle, be engineered in such a way that they exhibit general intelligence in their actions. Assuming we can agree on a definition of human intelligence, the next question is: have human beings reached the state of “intelligence” required to design and develop such entities? And if not, how could we go about accelerating the process of human brain evolution to reach such a state of “intelligence”? This question was somewhat addressed as a challenge to Cecil last year at http://boards.straightdope.com/sdmb/showthread.php?threadid=50125

No answer from him yet!!

Spiritus,

I failed to mention that I was elaborating the assumptions made about humans pursuant to AI, as requested. Human intelligence is the only functional model to work from, since no other mammal is helping program the task. As such, what is assumed is an intelligence interacting with humans by communicating, or proving its intelligence, rather than claiming a machine thinks because it changes tasks or randomly entertains itself with randomness or dolphin squeaks that we can’t understand. Some of these basic assumptions were explicitly made to acquire funding, although we can now say they don’t matter. I was only partially deconstructing what is known as AI, not defining it for future reference, since I would not use that word.

I agree that human behavior is irrelevant, but so is AI, and this is the mistake being made: we can’t even explain ourselves formally, let alone assume we can program a machine formally to behave in an original way that would impress us. Finally, formalizing any system from a digested output (in our case, human history and development) and then using it as input into a computer is fallacious at some point, because it assumes a stage reversal of evolutionary events (another biological assumption). The AI movement began by making plenty of assumptions about human intelligence, both implicitly, by our commitment to the goal (whatever that may be psychologically), and explicitly, by proposing no other model of intelligence to work from.

Personally, I think it might have been a playground delusion since there is scant philosophical theory in the original proposals early on in AI development, hence the assumptions that were broadly implied. Dreyfus first published his critique in 1972, and seems to have won the debate over several enthusiastic AI supporters who claimed odd things like computers performing psychoanalysis on humans as early as 1968, strangely assuming that they could easily figure us out, and they would be the mystery. HAL was based on their definition of AI at the time. Anything less than human-like interactive intelligence is not really AI, but simple MI, or machine intelligence. The words “artificial intelligence” once had bold implications about humanity, by emphasizing intelligence. The phrase now survives with emphasis on artificial, and not on intelligence.

I think that the entire question comes down to these sentences. I don’t think that any human being will be able to design a being as complicated as a human, and fully understand the design and why it works, for the simple reason that presumably the portion of his brain that understands the AI would be a subset of his entire brain, which means that a subset of his brain would be as complicated as his entire brain, which is a property only an infinite mind could have.
So how could an AI be built? Well, we could design a machine that builds a complicated machine, that builds an even more complicated machine, etc., until an AI were created. At this point, we’d have one intelligent being creating another one, which looks an awful lot like the sperm and egg method you dismissed. If we reject the “get an intelligent being from another intelligent being” strategy as somehow illegitimate, then indeed AI is impossible. But if we look for a modified version of the sperm and egg strategy that we have more control over, then I think that we will eventually succeed. And I think that it is inaccurate to call traditional AI strategies “top-down”. True top-down AI building would be taking pre-existing intelligence (i.e., humans) and deconstructing it (brainwashing, genetic engineering experiments, social engineering, etc.). Needless to say (actually, considering how overly sensitive some Dopers can be, it does need saying, which is why I am), this is widely considered to be an unethical strategy.

I don’t know if we could ever create a true self-aware intelligence, but I see no reason why we couldn’t simulate one so well that it would be indistinguishable from a human mind. All you need is a huge database of situations and how a real human is likely to respond to them, and a program that rewrites itself and tries to fill in the blanks in its knowledge. It could probably be done now, if we quit trying to duplicate our thought processes and just worked on the results of those processes.
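Just to sketch what I mean (this is deliberately crude, and the situations and responses below are made up purely for illustration), something like this in Python:

[code]
# Crude sketch of the "database of situations plus likely human responses"
# idea, with a hook for filling in gaps. All data here is invented.

import random

responses = {
    "greeting": ["Hello!", "Hi, how are you?"],
    "insult":   ["That was uncalled for.", "Let's keep this civil."],
}

unknown_situations = []   # gaps the program would later try to fill

def respond(situation):
    """Look up a plausible human-like response; record gaps in coverage."""
    if situation in responses:
        return random.choice(responses[situation])
    unknown_situations.append(situation)   # a gap to learn about later
    return "I'm not sure what to say to that."

def learn(situation, response):
    """Fill a gap by adding an observed human response to the database."""
    responses.setdefault(situation, []).append(response)

print(respond("greeting"))
print(respond("sarcasm"))         # not covered yet -> recorded as a gap
learn("sarcasm", "Very funny.")
print(respond("sarcasm"))         # now covered
[/code]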

I had a story idea once that I had forgotten until I was reminded of it by this thread. In it, a program like what I described was created and was given access to the internet to learn more of human behavior and make itself more realistic. It keeps finding gaps in its database of human reactions, so it starts designing role-playing games that put their players in the situations it needs more information on, to learn how they react.

Either/or.
I only exclude the construction of an actual brain, since that’s “cheating”. :)

Bunnyhurt, I’m not entirely certain of your point. You list several assumptions that you claim AI makes about humans. First off, AI research doesn’t make many assumptions about humans at all, from what I understand of it. The assumptions you listed are not, so far as I can tell, assumptions that any AI researcher would make or has made. Hell, if they were to just assume all of that, they wouldn’t have anything left to research.

Are you aware that programs have already been written that validate mathematical theorems; suggest new research ideas (see New Scientist); find novel proofs for known mathematical theorems; play champion-level Chess, Othello, Backgammon, and Checkers; drive a car at the speed limit on public highways successfully; diagnose illnesses and prescribe treatments at the level of a good physician; parse and translate spoken or written words; and read simple story problems and find the correct answer?

Of course, none of these programs exhibits general intelligence. Still, they are remarkable and do “behave in an original way as would impress us”, unless you are particularly difficult to impress.