Ed, I was working on a reply that answered your post bit by bit, as is conventional on SDMB, but the thing just got too long and unwieldy. Let me lay out the argument and see if that helps.
Suppose “understanding” can be explicated in terms of behavior (which I think it must be, if “understanding” is to be measured in a way useful for science and the development of technologies).
For it to be explicable in terms of behavior is not just for “it understands” to mean “it has exhibited these behaviors,” but rather for “it understands” to mean “it has exhibited these behaviors–and will continue to do so.”
When we ascribe understanding to people, we don’t just mean they’ve exhibited certain behaviors, but that they have done so and will continue to do so.
How are we justified in saying a person understands?
Not just on the basis of past behavior, but also on our correct classification of the human as being of a kind with other human beings. Since I know how humans resemble each other, and since I know that in other human beings, due to factors relevant to this resemblance, past understanding-like behavior indicates future understanding-like behavior, I am licensed to generalize to this particular human as well: since it has acted like an understander in the past, it will continue to do so in the future.
But if I’m looking at something that’s not a human being (or that isn’t known to resemble human beings in the right ways), then I am no longer licensed to make that generalization. Past understanding-like behavior might indicate anything for all I know.
Since I can’t make that generalization about the non-human, and since ascribing understanding to something requires making that generalization, I am not licensed to ascribe understanding to it. (But notice the qualification I added–as long as I don’t know it to resemble humans in the right way.)
That’s one argument. Here’s another.
A virtual person commonly, even typically, exhibits the following behavior: ceasing all function and “returning control” (so to speak) to the instantiating system.
This is behavior very atypical of every object we’re familiar with that we know to be an understander.
If understanding can be explicated in terms of behavior, then, we should be wary of ascribing understanding to a virtual person.
Additionally, relevant to the same argument:
Understanding-behavior is best explained in terms of the subject’s own interests and epistemological position.
But a virtual person’s behavior is often, even typically, best explained in terms of some other object’s interests and epistemological position.
Hence, whatever it is we can ascribe to the virtual person, it is not understanding.
Finally, a third thing: My own position:
A computer can understand by running the right program, despite anything I’ve said on Searle’s behalf above. A virtual person instantiated by a real person running a program can’t understand, for the reasons I gave above. But a person “executing” a program is doing something very different from what a computer is doing when it runs a program. A person “executing” a program does so for the person’s own reasons. But the computer running a program has no reasons of its own–leaving the program itself as the only source of relevant reasons present in the scenario.
Something difficult to swallow that I am committed to by the above:
Instead of a “Chinese Room,” have an “Adding Room” where the person is executing a simple addition program. Inputs and outputs are just like those of an adding machine. But just as I’d say that in the Chinese Room scenario nothing understands Chinese, so I’d have to say that in the Adding Room scenario nothing is adding. Can I justify this? I think so. To do so, I have to highlight the fact that any actual Adding Room will in fact behave differently from a simple adding machine. The actual Adding Room contains a crucial element which might at some point become bored, or start feeling rebellious, or become distracted by philosophical thoughts. This is not just a state internal to the Adding Room, either; it’s going to issue forth in a difference in the outputs. It makes a computational difference. Any actual Adding Room is distinct, computationally, from whatever simple adding machine the person inside it is supposed to be executing. So what the Adding Room is doing isn’t “addition,” if “addition” is defined as whatever it is the simple adding machine is doing. They’re doing something different.
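To put the computational point in concrete terms, here is a minimal sketch in Python. The names adding_room and executor_state are my own invented placeholders (nothing from the scenario itself), with a couple of flags standing in for boredom or distraction; the only point is that the two systems need not compute the same input-output function.

```python
def adding_machine(a, b):
    """The simple adding machine: a pure function of its inputs."""
    return a + b

def adding_room(a, b, executor_state):
    """A crude model of the Adding Room: the output depends not only on
    the inputs but also on the state of the person doing the executing."""
    if executor_state.get("bored") or executor_state.get("distracted"):
        # The human element may stall, refuse, or wander off, and that
        # shows up in the outputs, not just in some internal state.
        return None  # nothing comes out of the output slot
    return a + b

# Same inputs, but the two systems need not compute the same function.
print(adding_machine(2, 3))                                      # 5
print(adding_room(2, 3, {"bored": False, "distracted": False}))  # 5
print(adding_room(2, 3, {"bored": True, "distracted": False}))   # None
```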
If we start postulating supernaturally patient and dedicated people in our Adding Room, we’ve stopped talking about human beings in the Adding Room at all, really, and for all I can tell, we’ve started talking about mindless objects merely physically resembling human beings in many ways–and an Adding Room with such an object inside it I do think can add (and, with the right program, even understand).