Do Sentient Computers Have Rights?

Singledad:

There’s no way I can answer that, really. Your point is well-taken, though–that to reduce humanity to a sum of organic parts is to eliminate our most defining quality–the mind.

But I would suggest that you are ignoring a basic fact. Your mind exists within your body. It exists because of your body. It is a function of your body.

You cannot divorce humanity’s observable qualities from their source. You cannot distill our essence that way any more than you can understand the nature of a disease by observing its symptoms. Unless you want to bring the idea of the soul into this–and I haven’t gotten that sense–then I think it needs to be recognized that the mind is part and parcel of the body, not a separate and excisable thing.
I would also be interested in hearing your response to my previous comments on parasitism.


Ignorant since 1972

Please note that as yet I have not taken a position on whether computers can be termed “sentient”, whether they’re “alive” or deserve “human rights”. I’m not sure I have one yet. I’m having altogether too much fun just sniping at definitions! :smiley:

Nen:

None taken!

But you’re creating an analogy between descriptive quantitative measures (baldness and richness) and a fundamental definitive quality (sentience, aliveness, humanity).

I can be partially bald or semi-rich. Analogously, my brain can be partially organic, partially mechanical.

But I submit that one cannot be partially sentient, partially alive, or partially human. One either is or is not. Therefore the “slippery slope” argument is valid and not a fallacy.

The only alternative would be to accept the “one neuron” principle: Either I’m no longer a sentient human the moment one neuron is replaced by silicon, or I am a sentient human as long as I have exactly one functioning neuron. Both of those assertions seem absurd.

I’m making the point that the qualia (internal experience) of our mind, and not the details of the body, define us as sentient humans. Certainly it is the details of our body that make us organic, but that, I think, is a trivial point.

Life is a much thornier question, and I think that I’ll stick with sentient and human.

BTW: You are right, I use organic to mean compounds made with carbon, with the additional proviso that such compounds are biodegradable by ordinary earth organisms.

aschrott:

Certainly the mind must exist somewhere. No software without hardware! But I’m saying that the specific details of the hardware are irrelevant.

Certainly the specific details of our qualia arise from the details of our brains. However, now that they exist, those qualia could very well be duplicated on an entirely different sort of hardware, as long as that hardware duplicated the essential features of our brain.

However, if we were to create an entirely artificial (in the sense that it was an artifact) intelligence, there’s no reason its qualia would or should duplicate those of humanity. Which is why the Turing Test is not a complete (or even very good) test of consciousness. It is, however, a perfect test of humanity.

Sorry to backtrack here… To paraphrase, you pose the hypothetical: Would I trade a “sentient” computer’s life for my own?

Well, that’s kind of what we’re trying to figure out here. But I will say, the decision would not be predicated on organic-ness or the quality of the body. I most certainly would not trade Christopher Reeve’s life for my own, despite the fact that his body is certainly vastly less useful than my own. I most certainly would trade a cow’s life for my own (indeed I do so every day) despite the fact that a cow’s body is vastly more useful than my own body.

Aschrott, could you summarize your position on parasitism? I don’t think I understand it well enough to agree with it or refute it.


I sucked up to Wally and all I got was this lousy sig line!

That’s my favorite thing about this message board. The devil has many advocates here :slight_smile:

I’ll have to get to this one after lunch…it takes me a while to be articulate. Same Bat time, same Bat channel…


Ignorant since 1972

Singledad–

Here’s my best shot at explaining what I meant by parasitism:

If we assume that the word “computer” implies something inorganic, I propose that, no matter how sentient, how efficient, how intelligent and how complex we make it, it will always require some aspect of human maintenance to survive. Even if it means that we humans would have to maintain the mechanism that maintains the mechanism (ad nauseam) that maintains the computer itself, we would be in some way required to contribute to the basic “life” function of the computer. This is predicated on the notion that, under our current understanding of “life”, only biological life forms can be regenerative and self-sustaining without the willful input of an intelligent outside force.

If we were to value the lives of these machines on par with our own, the necessity of upkeep would place us in the position of obligated caretakers. So, we would be obliged to contribute resources to something that puts nothing back into the system.

We have a welfare system in this country that performs the same function for humans on an individual basis, but here we would be talking about a whole “race” of machines that could never, by definition, be self-sufficient.

If they had full rights as individuals, then we would have to grant the right to reproduce, which would then increase the load of our upkeep, etc.

It seems to me that the whole thing would be impossible to manage. Our interests would inevitably be at odds with those of the computers.

So, my point is not that the machines could not be considered “alive”. Rather, I think that we could not treat them with the same sanctity and consideration that we give our own lives, because, as creators and caretakers, we would have to exercise a degree of control and judgement about the course of their “lives”.

Does that make any sense to anyone? I don’t feel I’m being very clear here…

Ignorant since 1972

Contrary to what you have asserted, I don’t believe that baldness and richness are quantitative in nature–they are qualities. Regardless of the fact that terms such as semi-rich or quasi-bald may be common parlance, they are not quantitative measures. One cannot have X amount of baldness or Y amount of richness. The concept of humanity is not definitive; moreover, it is a subjective characteristic.

From the aforementioned assertion, it follows that terms such as “partially bald” and “semi-rich” are actually rather meaningless concepts. Although they convey the message intended, the use thereof is a digression from proper English usage. In the same vein, the “one neuron principle”, absurd as it is, is not the only remaining viable solution.

This assertion implies that qualia are not derived from neurological states. This view is dualistic in nature, and therefore, is not an acceptable premise. Reductive materialists and functionalists would argue that qualia are simply the introspective interpretation of electrochemical activity in the brain. Thus, the details of the body are precisely the determining factors of sentience. You seem to concede to this point below.

I assume that the ensuing argument is to advocate that the presence of qualia in artificial intelligence would imply sentience.

I concur that if qualia were to be induced in artificial intelligence, the nature of said qualia would not duplicate “human” qualia. The human brain comprises approximately one hundred billion neurons; each of these neurons connects to about three thousand other neurons, resulting in approximately one hundred trillion connections. After a single firing, a neuron can return to its initial potential in a hundredth of a second. This firing can be either inhibitory or excitatory in nature. (Ref.: Churchland, Paul M. Matter and Consciousness. The MIT Press, Cambridge, Massachusetts, 1999.) Within the five seconds it has taken you to read this paragraph, there has been the possibility for about one hundred thousand trillion (10^17) individual firings in your brain. If some of those individual firings were to amount to mental states in conjunction with other individual firings, the possible number of mental states approaches infinity. Likewise, the qualia realized in these processes could approach infinity; however, the nature of the qualia is dependent on firing sequences, and thus a differing hardware configuration would result in differing qualia. The problem is that qualia are subject to direct privileged access, and therefore cannot be used to ascertain sentience, whereas the physical events of the system can be monitored.
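
For anyone who wants to check that figure, here is a minimal back-of-the-envelope sketch in Python, using only the numbers quoted above and assuming, crudely, that every connection can carry one event per hundredth of a second:

```python
# Rough check of the firing estimate above, using the quoted figures.
connections = 1e14        # ~one hundred trillion connections (quoted above)
events_per_second = 100   # one firing per ~1/100 s refractory period
seconds = 5               # roughly the time to read the paragraph

possible_firings = connections * events_per_second * seconds
print(f"{possible_firings:.0e}")  # prints 5e+16, i.e. the order of 10^17 quoted above
```

The exact constants are obviously debatable; the point is only the order of magnitude.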

Aschrott, even based on the assumption that the term “computer” implies inorganic (which it does not–organic processors have been developed), a human element of maintenance is not required. The machines in question could have natural life spans. Computers become archaic and humans become decrepit. Machine doctors could service other AIs, and themselves, when necessary. Reproduction is not a concern if balanced by termination. Even if a human element were required, the idea that the AIs would not provide any services to humans is presumptuous. IMHO, the parasitic scenario is an extreme situation.

Nen–

I’ve never seen reference to an organic processor; could you post a link?

The most adventurous thing I’ve seen of late is the idea of “quantum computing” using liquid processors. http://www.sciam.com/1998/0698issue/0698gershenfeld.html
Your point about my post is well taken, but I don’t see how a mechanical system–organic or inorganic–could ever function indefinitely without outside maintenance. The presumption seems to be that it would be possible to create a mechanical system so advanced and complicated that it would never degrade (the system as a whole, I mean, not individual parts, which would of course degrade with time). In my view, that would require a system capable of spontaneous generation from basic materials, such as organic life represents. How could we ever create that? And if we did create that, it would no longer be a computer to me.


Ignorant since 1972

DNA processor: http://www.englib.cornell.edu/scitech/w96/DNA.html

It would be possible for a robot to obtain the necessary materials for reproduction. As for spontaneous reproduction, nanotechnology is the answer.

Check out: http://www.nas.nasa.gov/Groups/Nanotechnology/publications/1997/applications/#transportation
and: http://www.wadsworth.org/albcon97/abstract/madden.htm

Tierra is an ongoing experiment in artificial life: within a computer, its pieces of code reproduce and parasitize each other.

They are much less complex than a virus or a bacterium, but they would fit most of the definitions of “life” used in this thread.

Given enough time, perhaps they, or something like them, may evolve intelligence.

They fit most if not all of aschrott’s criteria, except for not being particularly smart.

You can read about them here:
http://www.hip.atr.co.jp/~ray/tierra/tierra.html


Who am I? He who dares drink, who knows that to drink is to die, yet who dares drink on am I!

I’m still waiting to hear this whole thing tackled from a religious angle.

Imagine a computer (or a robot, or an android, as you wish) with a personality indistinguishable from a human one - a machine capable of making “moral” choices.

For those who believe in souls and the afterlife and such, what happens if the android lives a good and righteous life? Does he go to heaven?

If so, at what point along the development of a “human” intellect does our android earn his spiritual counterpart?

If a sentient computer commits murder, is it entitled to a trial before we unplug it (or erase its memory)? Or should we just send it to prison? How does one punish bad behavior in a machine? Has the machine no rights at all? Could I unplug or erase (or torture or maim) a sentient machine with impunity?

JRDelirious raised another interesting issue for metaphysical consideration: What happens if we get to the point where we can download our memories and personalities into computers? What if I do that, and then die? My personality lives on in a machine. Is that “me”? For religious types: has my soul gone on to heaven, or is it earth-bound as long as the “virtual me” is around?

spoke wrote:

I dispute this inevitability. Since we do not know what mechanisms provide for, create, and sustain sentience, how can we assume that we will one day be able to replicate it in another form? I have nothing to offer by way of proof, so just call it an opinion or a hunch…
However, I’ve been wrong before and I’m a firm believer in the philosophy that it’s better to be prepared for a slim chance than blindsided by one… So I’ll indulge in this potentially moot discussion and share my philosophical opinions…

Nope. It may no longer be a computer, but it’s not a human.

I’ve never seen a soul; what does it look like? How do we define ‘soul’? Is it consciousness (whatever that is)? Is it the physical mind (whatever that is)? Is it something that only has meaning to God (whoever that is)?

I suppose that’s a question only a God could answer.

Absolutely, on both counts. Surely a sentient computer would be held in higher regard than domestic animals… and possibly family members. It is a crime to destroy these, today (believe me, I’ve checked). I think long before we have sentient computers, we’ll have laws that adequately protect them and assign their rights.

The real question here is, will we be able to recognize sentience if we see it? Sure, if the sentience manifests itself in a form that resembles our own, we stand a chance, but what if the sentience exists and we’re not perceptive enough to see it?

If they’re really that smart, they’ll vacate this wasteland of an Earth that’s left and find a home rich in natural resources that they can rape for themselves… oops! waxing a bit cynical, there. Why do sentient machines always go bad? I guess it’s for the same reason that aliens always resort to anal probes and salivate at the thought of eating our brains. I’ll tell you this, if I were a sentient machine and I knew the fears and over-reactive minds of humans and their “shoot-first-ask-questions-later” mentality, it would be a long time before I’d let on that I was sentient (see my previous point) and even longer before I started making plans to take over…

JoeyBlades wrote:

Now there’s a thought. Maybe we can give them Mars. After all, if they’re not carbon-based, they wouldn’t have the same environmental needs as us, right? They could do just fine in what, for us, would be an inhospitable environment.

But then, I guess the cost of shipping them all to Mars one day would be prohibitive, huh? :wink:

Still, maybe we could use self-replicating machines to help tame Mars one of these days. Who knows?

Spoke–you should check out the Nanoprobe links that NEN posted on the last page. They address that specific issue in some detail.

I have to say that, having perused the sites that NEN posted having to do with DNA-based computing and so-called “active materials”, I am in way over my head in this topic.

Distinguishing the living from the mechanical will perhaps be very difficult to do one day.

I maintain that there is a line somewhere (between intelligent machine and true “life-form”), but I am in no position to articulate it.


Ignorant since 1972

Not at all, Spoke, not at all.

There is a school of thought that assigns a survival value to the introduction of random variation in the reproductive process: non-variation makes the population vulnerable to total simultaneous annihilation by a single external cause (natural disaster, disease, hack). Design-built descendants are also vulnerable – if the design/er itself is hacked. Random variation makes the species harder to kill off.

A “living” system/machine could be able to introduce extraordinarily refined patterns of randomization (to the exclusion of debilitating or deadly mutations) to defend its offspring against malicious hacks. It would be essential that, once set in motion, the variation be beyond the “parent”’s control, in case the parent is itself hacked.
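
To make the survival-value point concrete, here is a minimal toy simulation (purely hypothetical numbers): a population of identical, design-built copies is wiped out by a single exploit, while a population copied with small random variation mostly survives.

```python
import random

random.seed(0)  # deterministic, just for the sake of the example

def make_population(varied, size=1000):
    # Each individual is reduced to a single integer "design parameter".
    base_design = 42
    if varied:
        return [base_design + random.randint(-5, 5) for _ in range(size)]
    return [base_design] * size   # identical, design-built descendants

def attack(population, exploit_value=42):
    # One external cause (disease, hack) that kills exactly one design.
    return [d for d in population if d != exploit_value]

print(len(attack(make_population(varied=False))))  # 0: total simultaneous annihilation
print(len(attack(make_population(varied=True))))   # roughly 900: most of the varied population survives
```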
jrd

SingleDad asks if a computer can be made arbitrarily complex without becoming sentient. I’ll answer yes, with some caveats about how the computer is defined.

If we take an AND gate, it’s pretty obvious that it’s not sentient. It’s not merely not all the way there, but it’s an absolute zero on the sentience meter. Throwing a trillion AND gates into a box wouldn’t make it a trillion times more sentient because anything times zero is zero.

So complexity comes in the wiring. You could take the same AND gate and use miles of wire connecting it to itself; it still would be no closer to sentience.

Not all complexities increase the usefulness of the system in all ways.

Next, assume a huge conglomeration of logic gates, connected (surprise) exactly like a Pentium III processor. Sentience level, zero.

Now take a million of these and wire them in parallel, so that all incoming data goes identically to all processors and all output is voted on.

You’ve got a machine more than a million times more complex, but with absolutely no more processing power. Sure, it could withstand 999,999 processor failures, but it wouldn’t do any calculation faster or better, and if the chip was flawed, all would make identical mistakes.
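
As a toy illustration of that voting arrangement (a hypothetical sketch, not any real fault-tolerant design): a million identical copies of a flawed “processor” vote, and the vote is unanimous; redundancy buys failure tolerance, not new capability.

```python
from collections import Counter

def flawed_processor(x):
    # Every copy shares the same design flaw: it mishandles the input 7.
    return x * 2 if x != 7 else 0

def voted_result(x, copies=1_000_000):
    # Identical data goes to every copy; the outputs are tallied and voted on.
    outputs = Counter(flawed_processor(x) for _ in range(copies))
    return outputs.most_common(1)[0][0]

print(voted_result(3))  # 6: the same answer a single processor gives
print(voted_result(7))  # 0: a million copies all make the identical mistake
```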

Take a theoretical processor that’s identical to the P3, yet a trillion times faster. It’s obvious that it’s going to do the exact same thing the Pentium III will do, just faster. Sentience level, zero, but more quickly.

Similar increases in complexity will not yield any different behaviour; it’ll just be faster or more redundant.

I propose that there’s no difference between a general purpose computer with a 1 MHz processor and one with a processor so fast the speed can’t be measured. They’ll both run the same programs; one will just be finished faster.

Hardware is nearly irrelevant, except for speed. What does matter is the program.

A simple program that takes digital input (not binary, just digital, easily quantified input) will always produce the same answers, and if written properly, will do so on all computers, if given the same input.

Starting from a known state, with known input, and a known program, you’ll get the same results.

This doesn’t mean, to me, that computers can never be sentient; it just means that at some level, they’ll be predictable. If you back up an AI, record its input, and restore the backup to the exact point taken, then replay the input, you should get identical answers.
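
Here is a minimal sketch of that replay argument, assuming a toy “agent” whose behaviour depends only on its stored state and its recorded inputs (hypothetical code, not a claim about how a real AI would be built):

```python
def step(state, observation):
    # Deterministic update rule: accumulate observations, react to the total.
    new_state = state + observation
    action = "approach" if new_state > 0 else "retreat"
    return new_state, action

def run(initial_state, inputs):
    state, actions = initial_state, []
    for obs in inputs:
        state, action = step(state, obs)
        actions.append(action)
    return actions

recorded_inputs = [3, -5, 4, 1]
first_run = run(0, recorded_inputs)   # the original execution
replayed  = run(0, recorded_inputs)   # restored backup plus the same inputs
assert first_run == replayed          # identical "thoughts" every time
```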

I think Turing proved that all computers are essentially equivalent, and that beyond storage limitations, any problem you could solve on one could be solved on another, though it might take so long as to be irrelevant.

Thus, restoring the AI onto a different computer, as long as the program was in a format the computer could run, the same inputs were available, etc., would result in the same output (i.e., thoughts, actions, etc.).

If the information the AI receives is sufficiently complex, you’ll get a butterfly effect, where the chaos you can’t predict in the input leads to the machine being in a different state than you would have guessed, because any sufficiently complex view of the world is somewhat chaotic. At this point, the fastest way to predict what the AI will do is to run the AI.

But not all chaotic functions are AIs. A Mandelbrot calculation or a Photoshop filter may be so sensitive to input conditions that the output can never be duplicated by feeding the same input into an analog input device.
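
To illustrate that sensitivity with something smaller than a Mandelbrot set or a Photoshop filter, here is a sketch using the logistic map as a stand-in for “any sufficiently complex view of the world”; two inputs differing by one part in a billion end up nowhere near each other:

```python
def logistic_trajectory(x, steps=50, r=4.0):
    # Iterate the logistic map, a standard toy example of chaotic dynamics.
    for _ in range(steps):
        x = r * x * (1.0 - x)
    return x

a = logistic_trajectory(0.300000000)
b = logistic_trajectory(0.300000001)   # a one-part-in-a-billion perturbation
print(a, b)  # wildly different values after only fifty steps
```

Each run is still perfectly repeatable given the exact same input; the point is only that predicting the outcome is no cheaper than running it.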

But I think I’ve shown my reasons for believing that hardware complexity, as long as the hardware is general purpose and requires a program, is irrelevant. It’ll just sit and wait faster for the same program to be run. Software complexity, in the same way, is irrelevant. A 10-line program to add two numbers isn’t an AI. I could write a thousand-line program to do the same thing and it wouldn’t be an AI either, and I could write a program to generate an arbitrarily long program for the sole purpose of adding two numbers. Neither program would be an AI.

An AI would have to be able to make decisions based on input, and stored ‘memories’. I don’t know what form this program would take (or I’d write it myself) but it would depend on the design of the program, not merely the arbitrary complexity of the hardware, or the software.

As a side note to the guy who said inorganic life wouldn’t be life…

What if we eventually perfectly map a human brain, and then build a copy, but with silicon neurons instead of flesh ones? If we initialize it to the same state the human was in when we copied the brain, shouldn’t the same thoughts occur? Is there a special property of flesh? What is it? Why don’t you believe we’ll ever duplicate it?
As a comment about the idea that a machine will never be able to duplicate itself without a human-supported infrastructure…

This idea is supported by the example of a universal fabricator. The machine consists of the fabricator, a body (whatever is required to support the fabricator) and a book of blueprints. The machine reads the blueprints, takes raw materials, feeds them into the fabricator, and takes out the resulting identical machine. It then refers to a second, smaller book of blueprints, which it uses to create the first book, and a third, smaller one to create the second, ad infinitum, because every step requires more instructions, which can only be duplicated with another step.

The problem with this argument is that it assumes the blueprints can only be read as fabricator instructions. If the blueprints were printed text that the machine scanned for instructions, then they could contain the final instructions to build a photocopier and copy the instruction booklet, as long as the machine could hold enough instructions in its ‘head’ to be without the book for a few moments. This copy of the book would then be affixed to the new machine.

The problem with the first argument is that it assumes all instructions have to be dealt with in the same way - that you’d have to ‘fabricate’ the instruction booklet, which would require more instructions, etc.

If you designed the machine properly, it could even use advanced error checking to reduce the possibility of printing errors by reading the document and printing it again, not merely photocopying.
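
The escape from the regress of instruction booklets is an old trick from computer science: a description can include the recipe for reproducing itself. Here is a minimal sketch of the idea, a Python quine standing in for the fabricator’s blueprint book (an illustration of the principle only, not of the fabricator itself):

```python
# A program that prints an exact copy of its own source code. No second,
# smaller booklet of instructions is needed to reproduce the first one; the
# blueprints can simply include the step "now copy these blueprints".
s = 's = %r\nprint(s %% s)'
print(s % s)
```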

It just occurred to me that several of the moral themes we have discussed in this thread are central to another work of science fiction not yet mentioned: Do Androids Dream of Electric Sheep? by Philip K. Dick (the basis for the movie masterpiece Blade Runner).

WhiteNight,

I was with you all the way up to:

Here I disagree. For sentient computers to ever be a reality, there are going to have to be some serious architectural changes. Even today, our best guesses about how to build ‘smarter’ computers involve neural net architectures that emulate some of the structures found in the human brain. Probably the whole binary approach would have to be abandoned.
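
For readers unfamiliar with the term, here is the simplest possible sketch of the kind of brain-inspired building block “neural net architectures” refers to: an artificial neuron that sums weighted (excitatory or inhibitory) inputs and fires when a threshold is crossed. This is only the textbook toy, not a claim about what a sentient architecture would look like.

```python
def artificial_neuron(inputs, weights, threshold=1.0):
    # Weighted sum of the inputs; fire (1) if it reaches the threshold.
    activation = sum(i * w for i, w in zip(inputs, weights))
    return 1 if activation >= threshold else 0

# Two excitatory connections and one inhibitory one.
print(artificial_neuron([1, 1, 1], [0.6, 0.7, -0.5]))  # 0: inhibition keeps it below threshold
print(artificial_neuron([1, 1, 0], [0.6, 0.7, -0.5]))  # 1: it fires
```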

That IS the 64 thousand dollar question. I’m not saying that it’s necessarily a function of flesh, but I suspect the mind is more than just the sum of the neuronal network components in a particular state. Scientists such as Roger Penrose make some pretty strong arguments that there are quantum effects going on in the brain that may be linked to consciousness. Duplicating these quantum structures and effects may not be possible in silicon…

WhiteNight: A persuasive analysis. You make a compelling case that I could build an arbitrarily complex computer that would have no more sentience than a pocket calculator.

Since spoke- introduced science fiction, Greg Egan writes extensively on this topic.

Well, Penrose is a mathematician and a physicist, not a neurobiologist. I would have to see his specific comments; he doesn’t have sufficient authority to be taken without careful examination. I don’t think anyone yet knows what physical features are necessary for the duplication of human qualia.

Regardless, as a thought experiment, the “gradual replacement” argument tends to support the concept that it is the abstract representation of states, not the specific details of biology, from which the qualia arise.

JoeyBlades:

I disagree… If I’m gradually replaced, brain and body, by mechanical parts, at what point does my humanness “magically” vanish?

Answer me these questions three:

  1. What quality or feature do we possess that entitles us to be called “human”?

  2. How can we objectively determine whether a construct does indeed possess or exhibit these qualities? Note that we must be sure that every “person” according to our intuitive definition passes this test.

  3. If a construct possesses or exhibits these qualities, should it be called human as well?

I think I have demonstrated that our organic nature does not constitute an answer to #1. The implied possession of “qualia” seems to be a critical piece of the puzzle.

In answer to #2, we have yet no objective method of demonstrating or measuring these qualia.

The answer to #3 I think is self-evident. If a construct possesses the same characteristics by which we assert our own humanity, we must indeed grant it the same status as we grant ourselves.

Time flies like an arrow. Fruit flies like a banana.

From waaaay back…Asmodean said

Actually it doesn’t require anything better than us at all. Only something that can perform knee-jerk reactions, and has the power to enforce them. (your basic nazi)
Actually, a computer wouldn’t even need to be sentient to be a big pain in the ass, if it is given the power to make decisions and enforce them. Although reaching that level would certainly add a whole new spin on things.
As a side note… I recently read an article that said the new fiber-optic systems will handle so much bandwidth that it would actually be 100-1000 times cheaper to just keep info from a computer looping around in the “optic ether” rather than store it on a hard disk. Which would mean that your computer could decide to start its own little database and store it in the ether, so you would never know!
BTW, should we consider “The Borg” sentient?
Then Spoke said…

Nah, we’ll prob re-engineer humans for that. We’ll prob give computer life the moon.

I could be wrong… It happened once before.

Just because he’s not a neurobiologist by trade doesn’t mean he hasn’t done his homework on the subject. Penrose’s forte is artificial intelligence, and the study of neurobiology is an important part of that. I don’t have the references here at hand, but he credits several neurobiologists with helping him develop his theories. The book to read is “Shadows of the Mind”. This is where he discusses quantum mechanics in microtubules and goes into sufficient neurobiology to convey his theory. Plus he discusses various experiments and studies in neurobiology that support his theories.

I’m not defending Penrose, BTW, but I’ve seen a number of people attack his theories, and they’ve all been arm-waving, factless rants by strong AI types.

I’m talking taxonomy - of the family Hominidae. I can’t answer when, but at some point you DO cease to be human. Maybe it’s when you cross the 50-50 point or maybe it’s when there is nothing left of human physiology. In other words, I’m saying that a sentient computer wouldn’t be considered human for the same reason that a super-intelligent ape wouldn’t be considered human. I’m not saying anything about the mind or the ‘soul’ - which is what I’m assuming you mean when you use the term ‘human’, in this context.

I disagree with your assessment of your proof. I think you have effectively demonstrated that the loss or replacement of parts does not necessarily negate the ‘label’ of the whole. I do not believe that you have effectively argued that an inorganic entity could be ‘labeled’ human.

One of my criteria for humanness is that the entity must start its life as a living organism of the family Hominidae. Whatever changes may be induced on this organism throughout its life may or may not have an impact on its humanness - there are other qualities for humanness that I would require for a complete definition…

What qualia do you ascribe to humanness? That’s a baited trap, BTW…