I think that, when you get right down to it, those who think AI is impossible on the basis of the Chinese Room concept are in effect saying that a “soul” or some other thing not emergent from the hardware is the basis of intelligence. (I put soul in quotes because I am not saying this requires any sort of god belief.) Assuming you do not limit the size or the interconnectedness of the things in the room, you could model the brain exactly, since the brain is clearly made up of components and interconnect. As mentioned above, though, the Chinese Room model does limit the interconnectedness; I don’t think you’re allowed to write new cards, for instance. But the deeper implication is that something non-physical has to be going on for intelligence to emerge.
As for complexity, too little makes intelligence impossible, but more doesn’t guarantee it.
I know there are plenty of people who think that, but I’m not sure that Searle is saying that – he says in Are We Spiritual Machines? that:
“Actual human brains cause consciousness by a series of specific neurobiological processes in the brain. The essential thing is to recognize that consciousness is a biological process like digestion, lactation, photosynthesis, or mitosis …”
and
“The brain is a machine, a biological machine to be sure, but a machine all the same. So the first step is to figure out how the brain does it and then build an artificial machine that has an equally effective mechanism for causing consciousness.”
and
“We know that brains cause consciousness with specific biological mechanisms…”
I don’t think Searle’s Chinese Room is intended to argue that something non-hardware-dependent or non-physical is needed for consciousness (although there are plenty of people who think that, and other authors in Are We Spiritual Machines? argue that, I’m sure, since the book is published by Intelligent Design advocates); I think it’s intended to illustrate that it’s possible to appear conscious without actually being conscious. But I think it fails miserably at this, and that Kurzweil is vastly more convincing than Searle. In response to Kurzweil saying that if a computer can convincingly make a case for its consciousness, then we should accept it as conscious, Searle actually makes the following argument:
I remember reading that passage at the airport and thinking “Is he yanking my chain?”
All reasonable. There are certainly many ways to implement something.
The whole reason for the Turing test is that there is hardly any way of telling if anyone or anything is conscious. We might ask the computer about its thought processes to any desired depth. I don’t think anything that doesn’t do self-examination can be conscious. The passage that you quoted illustrates that some people seem to have this reaction against any machine being called intelligent.
I don’t see the Chinese Room as having any purpose but what I mentioned. If it were somehow able to examine its own thought processes, why couldn’t it be conscious? There is an assumption that someone could write cards for every eventuality, but by making each answer dependent on the last, and allowing, say, 10 choices per step, it takes surprisingly few steps before you need more cards than there are atoms in the galaxy. So your intuition that it is a simplistic exercise is something I agree with.
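To put a rough number on that combinatorial explosion: assuming 10 possible replies per turn and the common order-of-magnitude estimate of about 10^68 atoms in the galaxy (both assumptions mine, for illustration), the card count blows past the atom count in under 70 turns:

```python
# How many conversational turns until the card count (10 choices per
# turn, each answer depending on the whole history) exceeds a rough
# estimate of the number of atoms in the galaxy (~10**68)?

CHOICES_PER_TURN = 10
ATOMS_IN_GALAXY = 10**68  # rough order-of-magnitude estimate

turns = 0
cards = 1
while cards <= ATOMS_IN_GALAXY:
    turns += 1
    cards = CHOICES_PER_TURN ** turns

print(turns)  # -> 69
```

So even a very short conversation, if every reply may depend on everything said before it, is far beyond any physically realizable card catalog.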
Might as well have. The phrase “reconfigurable hardware” does inspire some nutty thoughts and excessive flights of fancy…
Well, that’s not quite right when we’re talking (dreaming?) about actual AI, right? At least, AI that makes it out of a lab. I should think five 9s is a minimum goal; even better if it can be adaptively self-healing.
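For reference, “five 9s” (99.999% availability) allows only about five minutes of downtime per year; a quick sketch of the arithmetic:

```python
# Annual downtime budget implied by an availability target
# (using a 365.25-day year).
def downtime_minutes_per_year(availability: float) -> float:
    minutes_per_year = 365.25 * 24 * 60
    return minutes_per_year * (1 - availability)

for nines, avail in [(3, 0.999), (4, 0.9999), (5, 0.99999)]:
    print(f"{nines} nines: {downtime_minutes_per_year(avail):.2f} min/year")
```

Five 9s works out to roughly 5.26 minutes of allowed downtime per year.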
That sort of confirms alterego’s ref; I’ll look into it. (Amazing what one can find on the IBM site…I’ll have to set aside a couple hours, as I know I’ll get distracted by unrelated stuff.)
For some reason locating this was a real strain on my Google Fu! But here it is: IBM’s eFuse. Somewhat surprisingly, this tech seems to have made it into the Xbox 360.
Reasonable, although he’s just assuming that something special in neurons creates consciousness, whereas Kurzweil argues that it’s an emergent pattern. But I was quoting those to show that he isn’t arguing for a non-physical soul, but for something neurons do to create consciousness.
Indeed. But his argument was basically “Kurzweil said a machine that claims to be conscious should be believed, but I can make my computer say that right now by typing PRINT "I AM CONSCIOUS" in my BASIC interpreter and running the program, so it means nothing!” Which is such a ludicrously stupid version of what Kurzweil actually said that I get the impression Searle is trying to pull a fast one, hoping we won’t notice that he entirely left out the part about the imagined machine having to argue for its own consciousness convincingly, not just display "I AM CONSCIOUS."
As I understand it, the purpose isn’t to argue for the necessity of some kind of soul, but to argue that if you created a neural network (or whatever) on a computer that learned to act like a human and even eventually passed the Turing test, it wouldn’t necessarily be conscious, because there’s something special about neurons that creates consciousness that a neural network wouldn’t have, no matter how conscious it seemed.
Again, I just don’t buy this – I think if a thing convincingly seems conscious, it almost certainly must actually be conscious. The Turing test would, therefore, be sufficient (although not necessary) evidence for consciousness for me. If Searle is right about there being something in neurons that is required for true consciousness, then I expect no simulated-on-traditional-hardware AI could ever behave as if it were conscious.
Ah. Kind of like a hive mind thing, composed of neurons. That’s so bizarre I never considered it.
Haven’t these people ever heard of Eliza? That nonsense got dealt with when I was in college.
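For anyone who hasn’t seen it: Eliza-style “conversation” is just shallow pattern matching with no understanding behind it. A minimal sketch (these are my own toy rules, not Weizenbaum’s actual script):

```python
import re

# Toy Eliza-style responder: reflects the user's words back via
# regex rules, with zero comprehension of what is being said.
RULES = [
    (r"I am (.*)", "Why do you say you are {0}?"),
    (r"I feel (.*)", "How long have you felt {0}?"),
    (r".*\bmother\b.*", "Tell me more about your family."),
]

def respond(text: str) -> str:
    for pattern, template in RULES:
        m = re.match(pattern, text, re.IGNORECASE)
        if m:
            return template.format(*m.groups())
    return "Please go on."

print(respond("I am worried about machines"))
# -> Why do you say you are worried about machines?
```

A handful of rules like these was enough to convince some 1960s users they were talking to a sympathetic listener, which is exactly why “it produces plausible replies” proves so little on its own.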
But that is an assumption Searle makes and can never prove. If everyone saw a conversation with the Chinese Room, and it seemed conscious, why not, just as easily, say the cards have that special something? And it still sounds like Searle is assuming a kind of soul, albeit one broken into little bits of soul: the extra something is added at the neuron level, not the brain level. So I think I still feel that what is going on here is a rejection of a purely physical explanation of consciousness.
Sure. But the nice thing about “brain” is that it provides an existence proof that intelligence can arise from matter (assuming one doesn’t posit supernatural forces). Since it’s the only one we have at this point, it makes a good target.
Busy day; thanks for that. I’ll read it when I get the chance, along with the eFuse reference.