If man could create a computer that thinks – really thinks – and is self-aware, would that be a good thing? Would a sentient computer still be just a machine, or would it become a “being”? My personal belief is that if a computer were self-aware, the fact that it was a consciousness stuffed into a box would make it no less a being than any other intelligent and self-aware creature.
What moral obligations, if any, would the creators of such a “being” have to their creations? And what moral authority would they have over them?
A servo-mechanism that merely responds to electronic stimuli and performs programmed actions is useful, but it is not “owed” anything at all. Maintenance is done not for the machine’s sake, but to keep the machine able to do its owner’s work.
But a sentient machine might choose to do, or not to do, its maker’s bidding. Would maintenance then be a moral obligation of the maker? Would pulling the plug become a type of murder?
Many people think a god created man with the capacity to reason, and that man’s use of that capacity caused the maker no end of trouble. Would man’s creating a mechanical “mind” be equally troubling?