I read the thread about whether a sentient computer would develop a belief in God with some interest, but I confess I found some of the contributions hard to follow because they assume a fair bit of knowledge of the debate.
I was wondering if we could go back a bit, for the benefit of people like myself (still trying to work out how the human mind works and extremely hazy about computers), to establish where this debate has got to.
Some of the problems I have with the idea of an intelligent computer are these:
- Are you talking about creating a computer with a human-type intelligence, or a computer that is intelligent, but not necessarily in a human way? Even if it was intelligent in a non-human way, it would have to know it was intelligent, wouldn’t it? Otherwise it would merely be efficient (like current computers!).
- Without a sensible definition of what is meant by intelligence, what do you mean by “artificial intelligence”?
- Let’s imagine that you could “program in” all the major intellectual elements of the human mind - memory, language ability, reasoning power, working memory, and so on. Where would the computer get the motivation to use these abilities? Why am I sitting here typing this, rather than learning Japanese or cleaning the toilet? Because, for complex current and historical reasons, I want to. So, how would you get a computer to want to do A rather than B? Intelligence seems to need some sort of emotional force before it can get off the ground - I can’t see how an artificial mind would work at all without first having a sense of priorities, however simple, and “a sense of priorities” implies emotion.
- Can you envisage artificial curiosity, artificial emotion, artificial imagination? My guess is that intelligence (or perhaps I really mean consciousness) requires a sense of self, and I’m not sure that a sense of self can exist without a biological body.
I suppose it is possible to imagine that one day we grow a body in a lab and put the artificial intelligence in it, but what really interests me about this debate is not SF speculation, but the fact that it forces you to identify the characteristics that make us human. If we can build abilities X, Y, and Z into a computer, what abilities cannot be built in, and why?
Any thoughts will be read with interest…