I’m comfortable with the Turing test for reasons discussed below. The question of “consciousness” is interesting but, IMHO, largely irrelevant to the question of whether AI is possible.
I have two problems with your cubic zirconia analogy. First, gemologists and many laymen can tell the two apart, and that’s enough to make a big difference in the marketplace. More importantly, the question is not whether two things are “the same” – the question is whether they share a certain abstract quality that can be evaluated only by appearance/behaviour.
IMHO, the only reasonable way to decide if an entity is intelligent is to see if it behaves intelligently. I believe the Turing Test captures this notion adequately.
So perhaps a better jewelry analogy would be to ask the following question:
Okay, let’s start out with more about the Turing test. My biggest objection to it, as described, is that it seems to define intelligence as that which behaves in a superficially human manner. Should that be the definition of intelligence? Does intelligence have to look human to be intelligence? An alien intelligence that communicates and solves very complex problems beyond our understanding, say by manipulating magnetic fields as its symbol system, would not be considered intelligent, yet a good imitation of human behavior would be. A good forgery of a da Vinci, even one able to fool experts using current techniques, is still a forgery, not an original work of art.
Interestingly, Hofstadter went on, in “Fluid Concepts and Creative Analogies”, to characterize the Turing test as a probe of cognitive mechanisms. His recent work has focused on modeling cognitive processes in microdomains, with special interest in modeling analogies and applying them in fluid ways. The “how” matters.
More on “consciousness” as a resonant state: anyone who is interested in cognitive mechanisms, human or artificial, owes it to themselves to become familiar with Stephen Grossberg’s work on Adaptive Resonance Theory. He was one of the founders of self-organizing systems and neural networks, though others have gotten a lot more press with much weaker models. Lists of his articles, including those available online, are here and here. The article that best reviews the argument that all conscious states are resonant states is here.
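For the curious, Adaptive Resonance Theory is not just a theory; ART1 is a concrete algorithm. Below is a minimal Python sketch of the ART1 matching cycle for binary inputs. The choice function, vigilance test, and template update follow the standard ART1 formulation, but the class name, variable names, and demo values are my own, and the full theory’s neural dynamics (gain control, the 2/3 matching rule) are abstracted away here.

```python
import numpy as np

class ART1:
    """Minimal ART1 network: binary inputs, fast learning, one template per category."""

    def __init__(self, n_features, vigilance=0.75, alpha=0.001):
        self.n = n_features
        self.rho = vigilance   # vigilance: how complete a match must be to resonate
        self.alpha = alpha     # choice parameter; biases search toward specific templates
        self.weights = []      # learned binary templates (top-down expectations)

    def learn(self, pattern):
        """Present one binary pattern; return the index of the resonating category."""
        I = np.asarray(pattern, dtype=bool)
        # Bottom-up activation: score each category by the ART1 choice function.
        scores = [(I & w).sum() / (self.alpha + w.sum()) for w in self.weights]
        # Search cycle: try candidates best-first until one passes the vigilance test.
        for j in np.argsort(scores)[::-1]:
            w = self.weights[j]
            match = (I & w).sum() / max(I.sum(), 1)  # top-down match with the template
            if match >= self.rho:
                # Resonance: input and expectation agree closely enough; learn by
                # pruning the template down to the features they share.
                self.weights[j] = I & w
                return j
            # Mismatch: this category is reset and the search moves on.
        # Nothing resonated: recruit a new category whose template is the input itself.
        self.weights.append(I.copy())
        return len(self.weights) - 1

# Tiny demo: high vigilance keeps categories narrow.
net = ART1(n_features=4, vigilance=0.7)
print(net.learn([1, 1, 0, 0]))  # 0: first pattern founds category 0
print(net.learn([1, 1, 1, 0]))  # 1: match 2/3 < 0.7, so a new category is recruited
print(net.learn([1, 0, 0, 0]))  # 0: match 1/1 >= 0.7, resonates and prunes the template
```

The vigilance parameter is what makes “resonance” operational: a pattern either resonates with an existing top-down expectation (and refines it) or triggers a reset and the recruitment of a new category.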
The Turing Test was never really intended to provide a definitive definition of intelligence, but to avoid the problems associated with our lack of a definition.
We acknowledge humans to be intelligent (rightly or wrongly). If a computer can act in such a way that we mistake it for a human, then logically it has as much of a claim to intelligence as any human does.
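For concreteness, the test itself is easy to state as a protocol. Here is a minimal sketch of one round in Python, assuming a judge object with ask and identify_human methods; that interface and the single-round framing are my own simplification of Turing’s 1950 setup, not his exact wording:

```python
import random

def imitation_game(judge, human, machine, n_questions=10):
    """One round of the imitation game. `human` and `machine` are callables
    mapping a question string to an answer string; `judge` supplies questions
    and renders a verdict. Returns True if the judge mistakes the machine
    for the human."""
    # Hide the witnesses behind anonymous labels, in random order.
    labels = ["A", "B"]
    random.shuffle(labels)
    witnesses = {labels[0]: human, labels[1]: machine}
    transcript = []
    for _ in range(n_questions):
        question = judge.ask(transcript)          # the judge may adapt to the transcript
        for label in sorted(witnesses):
            transcript.append((label, question, witnesses[label](question)))
    verdict = judge.identify_human(transcript)    # judge names "A" or "B"
    return witnesses[verdict] is machine
```

Run many rounds with many judges; the machine “passes” if judges misidentify it at or near chance, which is the operational content of the claim above.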
Turing was counting on the inability of his critics to seriously consider the possibility that humans aren’t intelligent… and he was right.
I concede that an intelligent entity might not necessarily pass the Turing test. You don’t even need to leave the human race to see this. For example, a Chinese person who spoke absolutely no English probably couldn’t pass the Turing test (if administered in English).
On the other hand, an entity that does pass the test should be considered “intelligent” IMHO.
However, I don’t think this distinction is important to the question of whether AI is possible. If it is possible to construct a device that passes the Turing Test, then AI is possible. Conversely, it seems to me that if we do construct an intelligent device, it won’t be much of a leap to get that entity to pass the Turing Test.
Like I said before, the question is whether the artificial entity has some quality possessed by the original, not whether the two are the same thing. Clearly an AI device would not be human.
matters to what? to whom?
I agree that the “how” is interesting. Perhaps it is necessary to understand the “how” in order to construct an AI device. But in deciding whether an entity is intelligent, we need only evaluate its behaviour. The “how” is irrelevant. (Excepting situations such as “the Turk”, the chess “automaton” with a human hidden inside, of course.)
It matters to cognitive scientists, whether their focus is human intelligence, animal intelligence, “hive minds”, or artificial intelligence (single machine or distributed multi-agent). It matters because “we” are interested not only in what an entity currently does, but in what it will be able to do.
Any thoughts on what I’ve proffered from Pinker, Hofstadter, and Grossberg by way of possible definitions?