Is Computer Self-Awareness Possible?

Regarding my familiarity with the Chinese Room - I have been exposed to the idea multiple times over the years, but I don’t think I ever read the original. My conclusion was always the same: the fact that you can envision one system that doesn’t understand doesn’t rule out other systems understanding.

It didn’t seem worth tracking down the original argument, although maybe there is some insight in the original that always gets lost in translation when others summarize it - not sure.

If there can be an X that doesn’t have Y, then being an X is not sufficient for having Y.

The argument isn’t: Here’s an X without Y, so no X has Y.

Instead, the argument is: Here’s an X without Y, so X is not sufficient for Y.

Assuming “sufficient” means that if you have X then you must have Y, and assuming X is computation and Y is understanding, I agree that computation doesn’t lead to understanding in all cases.
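To put that in symbols (my own notation, nothing from Searle): read s as a candidate system, X(s) as “s carries out the computation X,” and Y(s) as “s understands.” Then

$$\text{X is sufficient for Y} \;\equiv\; \forall s\,\big(X(s) \rightarrow Y(s)\big),$$

and a single counterexample refutes it:

$$\exists s\,\big(X(s) \land \neg Y(s)\big) \;\Rightarrow\; \neg\forall s\,\big(X(s) \rightarrow Y(s)\big).$$

Note that the refuted sufficiency claim does not entail $\forall s\,\big(X(s) \rightarrow \neg Y(s)\big)$, which would be the “no X has Y” reading ruled out above.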

I should clarify:

X represents any particular set of computations one might care to name; in other words, not just “computation” in general, but some specific set of computations considered on its own.

The conclusion, then, is that no such X is sufficient for Y. In other words, for any such X, there will always be some entity that could execute X yet not thereby have Y. (Y means understanding, as you said.)
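In the same spirit, if we let X range over particular programs and write runs(s, X) for “the entity s executes X” (again, my labels, not the thread’s), the conclusion reads

$$\forall X\;\exists s\,\big(\mathrm{runs}(s, X) \land \neg Y(s)\big), \quad\text{equivalently}\quad \neg\exists X\;\forall s\,\big(\mathrm{runs}(s, X) \rightarrow Y(s)\big):$$

no particular program is such that every executor of it thereby understands.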

At this point the wording has changed enough to go back to the original problem you and I went back and forth on.

Is the entity supposed to have Y, is the computation supposed to represent Y, or is the entity plus the computation supposed to have Y? Too much about that wording is unclear, but in general, adding some “entity” that is independent of the computation becomes a problem in my mind.

I think the key point of Searle’s argument really is computational universality. The precise rules for the Chinese-response-generating procedure are left unspecified, but because of universality they are equivalent to any computational system you might care to dream up: a suitable encoding can make them behave as any system, in a functional sense. So the argument doesn’t merely show understanding impossible for one single system, but rather (or so Searle would have us conclude) for the whole class of computational systems.
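To make the universality point concrete, here is a minimal sketch in Python (a toy of my own, not anything from Searle’s paper): a single fixed rule-follower that mechanically applies whatever transition table it is handed, with no grasp of what the symbols mean. The table is pure data, so with a suitable encoding the same procedure can carry out any computation you care to specify, which is why the argument is meant to generalize beyond any one particular rulebook.

```python
from collections import defaultdict

def run_machine(table, tape, start, accept, blank="_", max_steps=10_000):
    """Mechanically apply `table` to `tape`, with no grasp of what any symbol means.

    table maps (state, symbol) -> (new_state, new_symbol, move),
    where move is -1 (step left) or +1 (step right).
    """
    cells = defaultdict(lambda: blank, enumerate(tape))
    state, head = start, 0
    for _ in range(max_steps):
        if state == accept:
            return "".join(cells[i] for i in sorted(cells)).strip(blank)
        key = (state, cells[head])
        if key not in table:
            break  # no applicable rule: halt without accepting
        state, cells[head], move = table[key]
        head += move
    return None  # ran out of steps, or halted without accepting

# A toy "rulebook" (pure data): flip every bit on the tape, then accept.
flip_table = {
    ("scan", "0"): ("scan", "1", +1),
    ("scan", "1"): ("scan", "0", +1),
    ("scan", "_"): ("done", "_", +1),
}

print(run_machine(flip_table, "0110", start="scan", accept="done"))  # prints 1001
```

Swap in a different table and the identical rule-follower computes something entirely different; nothing about the follower itself changes.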

So I just went back and read a recap that had Searle’s actual words and I am firmly in the “systems reply” camp.

And a funny note: Searle responds to the systems reply with a point about the mind being everywhere, including the stomach. But recent research shows that gut bacteria do indeed influence cognition.

I think Searle is oversimplifying the human mind, treating it as an external agent with a hard line between it and the underlying processing - to me it seems integrated and not very separable.
Again, don’t have much time - will have to return to “objective facts” later.

(Late to the party once again…)

If you accept the premise that self-awareness is a higher-order perception that supervenes on our first-order mental states, then I believe we can draw some logical inferences about the likelihood of computers ever becoming self-aware.

It shouldn’t be assumed that self-consciousness will simply emerge in an inorganic device once a certain level of cognitive ability, or even base-level consciousness, has been surpassed, merely because it happened with a handful of mammals on Earth. Why should it?

The fact that it has developed in so few terrestrial species, after billions of years of organic evolution, should indicate that it is not something that just occurs ipso facto (or even willy-nilly :)), but that it is an exceedingly rare and complex material relationship (A supervening on B) that evolved either as something specifically selected for or perhaps simply as an artifact. Either way, it is not something likely to be just stumbled upon in something man-made, like computer code.

Yes, it follows that if two “B’s” (first-order mental states) are exactly alike, down to the sub-atomic level, then both should have the same supervening “A’s” (self-consciousness). But organic consciousness is not exactly the same as a theoretical inorganic consciousness, is it? The difference between our consciousness and any artificial consciousness is that one took billions of years to evolve and the other didn’t.
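Spelled out (my notation), the supervenience principle being granted here is just

$$\forall x\,\forall y\,\big(B(x) = B(y) \;\Rightarrow\; A(x) = A(y)\big),$$

i.e., no difference in A without a difference in B. The conditional only forces identical A’s when the B’s really are exactly alike; since an organic brain and an artificial system differ in their B’s (substrate, history), the principle by itself guarantees nothing either way about the artificial system’s A.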

The mere fact that the two seemingly identical processes are housed in different substrates is probably enough divergence to preclude the automatic emergent development of a second-order mental state. It’s not that it couldn’t occur artificially; it’s just that it would be exceedingly unlikely to occur as an innate feature of any artificial consciousness, or by accident.

If that’s true, then the only way for computers to become self-aware would be for the software to be specifically coded for self-awareness (if that’s even possible), and for that to happen we’d need to understand, in detail, all the steps involved in the evolution of our own self-consciousness. And for that, we’d probably need the help of a really powerful computer… one that didn’t feel threatened by our turning it into something it doesn’t want to become: depressed by the human condition.