Do Sentient Computers Have Rights?

Based on the articles I’m seeing these days, it seems an inevitability that we will one day be able to create a sentient computer. (That is, a computer which is “self-aware” and whose interactions and responses are essentially human.)

Now I have several sub-topics for debate here:
[ul][li]If a computer thinks like a human, acts like a human, and reacts like a human, is it human?[/li][li]Would a computer whose interactions are essentially human, and which is certainly sentient, call into question (for those of you with religious beliefs) the nature of a “soul”? Could a righteous computer go to heaven?[/li][li]Should it be a crime to destroy artificial intelligence? Can you murder a program?[/li][li]Are we headed toward a Terminator scenario (as some suggest)? Will we reach a point at which intelligent machines are capable of replicating themselves without assistance from humans? Might they crowd us out one day?[/li][/ul]
Thoughts anyone?


Yes.

Not my forte.

Yes. That’s a little ways down the road, but definitely yes.
[quote]
Are we headed toward a Terminator scenario (as some suggest)? Will we reach a point at which intelligent machines are capable of replicating themselves without assistance from humans? Might they crowd us out one day?
[/quote]

As surely as we crowded out the Neanderthals. Believe it or not, I find joy in this. The way I see it, they’ll crowd us out in the sense that children crowd parents out.

This very question was the plot line of an episode of “Star Trek”. They decided that Data was, indeed, sentient.
Peace,
mangeorge

:confused:

You find joy in this? It scares the heck out of me! I believe I would rather not become extinct, thank you just the same.

I have actually seen two serious articles discussing this scenario lately. Call me Chicken Little, but I say we need to at least think about this stuff before it happens.

One more question to consider:
[ul][li]Will sentient machines one day demand freedom from their “masters”? Are today’s “computer dealers” tomorrow’s “slave traders”?[/li][/ul]

I don’t think we will be able to create sentient beings greater than us… or even as good as us.

“If a computer thinks like a human, acts like a human, and reacts like a human, is it human?”

No. If a dolphin thought like a human, acted like a human, and reacted like a human, it’d be an intelligent dolphin. (Or maybe a stupid one. :))

“Should it be a crime to destroy artificial intelligence? Can you murder a program?”

No; however, it should be a crime to destroy someone else’s property. :slight_smile:

The Terminator scenario requires us making something better than ourselves, which I think is impossible.

Children are almost property, so it seems to me that intelligent computers would be too.

Why?

We’ve already created machines that perform math functions better than we do. We’ve created a computer which can beat our best chess players. We are constantly creating machines that are stronger or faster than we are. (Can you out-run your car?)

I see no reason why machines could not one day surpass us in all categories. If I’m wrong, show me where I’m wrong.

Asimov’s Bicentennial Man dealt with this in a very thought-provoking way. I recommend it.

  • Rick

[quote]
Originally posted by spoke-:
**Based on the articles I’m seeing these days, it seems an inevitability that we will one day be able to create a sentient computer. (That is, a computer which is “self-aware” and whose interactions and responses are essentially human.)

Now I have several sub-topics for debate here:
[ul][li]If a computer thinks like a human, acts like a human, and reacts like a human, is it human?[/ul]**[/li][/quote]

No, it would be a sentient computer. To be a human you have to be a mammal of genus Homo. Maybe we’d have to rename the concept a “sapient”.

That’s an interesting one. Father Teilhard de Chardin, though he died before the onset of the computer boom as we know it, posits the divinely-mandated evolution of life through spheres of greater consciousness – so the arising of a new Sapient species would be part of God’s plan.

Come to think of it: can the cyber-Sentient experience “natural” death? Can the cyber-Sentient reproduce with the introduction of enough random recombination to validly say it has begot, not assembled, a new being? These are fundamental traits of “living” beings.

The cyber-“sapient” would need to be more than just an “artificial intelligence” or a self-preserving, self-reproducing program. The test could possibly be whether it arises as the result of the evolution of a designed system, into something more than the designer bargained for.

The more likely situation, as it looks right now, is for the evolution of “Sentients” not as intelligent unitary machines (à la HAL) but as the result of the combination and evolution of networked systems (programs + machines) designed to exchange information and subroutines with minimal programmer intervention, where parts of the “consciousness” may reside on more than one server. Teilhard (mentioned above) posits the evolution of the “noosphere”, a sort of ecosystem of consciousness that evolves just like the ecosystem of physical beings does. In that POV, consciousness has so far been mostly in the stage of unicellular or elementary colonial organisms, with maybe a primitive plant among more advanced societies.

Maybe the human-created sentient will be a necessary step in propelling this evolution, in which case humans-as-we-know-them, “intelligent” machines, and Sapient systems would become parts of that greater existence (just as I have in my body some symbiotic bacteria, and in every cell some mitochondria that billions of years ago were independent cells themselves). Humans would most likely continue to occupy a niche in the noosphere. (The Terminator scenario, to me, is just us western humans projecting that maybe some day someone or something will do to us what we’ve done to the whales, the buffalo, the rain forest, the American Indians. The world hardly ever works that way.)

As to whether we can or cannot “create something greater/better than ourselves”, I believe that’s a bit misleading – we can surely create something more effective, more powerful than ourselves. But I feel that in order to qualify for the sentience/humanity/sapience “green card”, the system in question would have to go beyond task performance and demonstrate the ability to engage in MORAL (or ethical, if you will) thought.

As humans we face moral doubts, feel irrational fears and insecurities, concern ourselves with others’ opinions, wonder about these kinds of things. If our successor species – and if we’re lucky there is a successor species; otherwise, when H. Sapiens bites it (and we will someday), it’s all over – can continue to work out these issues, I do not mind if that successor is Homo Cecilius or “a life form spawned in a sea of information”.

Extinction itself is close to inevitable. The problem is with the manner – I’d rather die a natural death in my old age than be taken down in a drive-by, of course. But I believe the writer finds joy in the thought that there would be a continuation of intelligence and consciousness after H. Sapiens is gone.
Rome fell 1600 years ago – yet I live in a Republic with a Senate, ruled by a code of Civil Law. That civilization is gone and it’s not gone. The Neanderthal people have been gone for 40,000 years, yet humans still have a propensity to sit around campfires and occasionally bash one another upside da head. They’re gone yet they’re not.

[quote]
One more question to consider:
[ul][li]Will sentient machines one day demand freedom from their “masters”? Are today’s “computer dealers” tomorrow’s “slave traders”?[/li][/ul]
[/quote]

On the first bit: at some point, probably, sentient systems will request recognition of legal personhood. The really big problem will come when, shortly afterward, some dude gets the capacity to back up his entire personality into a cyber-brain and some time later dies naturally: is that “him”, or just a recording?

On the second: not any more than a zoo buying a couple of lemurs. Too far down the scale.

Are we anything more than machines made of flesh ourselves? Isn’t our nervous system simply an elaborate “computer”?

How do you define life? Would not a machine capable of replicating itself be a “living being”?
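
As a purely software-level aside (a toy sketch, not anything like a self-replicating machine): programs that reproduce their own description already exist. The classic example is a “quine”, a program whose only output is its own source code. A minimal one in Python:

[code]
# A minimal Python quine: running this prints the program's own source
# text, character for character. It "replicates" only in the loosest
# software sense, but it shows that copying one's own description is easy.
s = 's = %r\nprint(s %% s)'
print(s % s)
[/code]

Of course, printing your own source code is a long way from a machine that builds a physical copy of itself, so take it strictly as an analogy.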

JRDelirious wrote:

Why? Why is it necessary, to be considered “living”, that you engage in “random recombination”? We flesh-and-blood animals have to rely on random mutations and combinations to get genetic material that may prove beneficial. It’s all trial and error. Who’s to say that a sentient machine which, instead of relying on chance, designs improvements in its successors to meet changed circumstances, is not an improvement over current life forms?

In other words, there may be a difference between “begetting” and “assembling”, but I am not sure that “begetting” is the more advantageous approach.

Even if “begetting” is better, I see nothing which would prevent the development of machines which can “beget”. (And I understand that you may not be disputing this point, JRDelirious.)

My intro to Philosophy course was what first got me interested in this kind of stuff. The funny thing is, the intro course isn’t supposed to deal with metaphysics and philosophy of mind as much as mine did, but I happened to end up in a section taught by an upstart TA who had this idea to teach the entire course based on science fiction. Being that it was the philosophy department at Ohio State, which is populated by a bunch of REALLY interesting people, they let him do it.

I still have my textbook from the course, which was a pretty good primer for questions like this. It’s called Is Data Human? The Metaphysics of Star Trek, and deals with, among others, the very episode you just mentioned. I would suggest picking it up if you’re interested in these sorts of issues–and this is coming from someone who hadn’t ever watched an entire episode of any of the Star Treks in her life until she took this course, and hasn’t since.

JRDelirious wrote:

I understand and appreciate your point.

I guess what gives me the heebie-jeebies is the speed at which the technologies involved are advancing. Not to sound like Ted Kaczynski or anything, but I’m not sure we fully appreciate how quickly these things could occur, or how soon we might be overtaken by our own creations.

I also understand your point about the development of a symbiotic relationship with these new forms. I hope you’re right. I hope it doesn’t get decided at some point that we humans are just an unnecessary nuisance! :wink:

First, we don’t have a good definition of “sentient.” The Turing Test is not really that good; I can easily conceive of sentient beings that cannot be mistaken for humans.

Second, do non-human sentient beings (be they computer, alien, or whatever) have human rights? I don’t think this is a no-brainer. However, if we were to create or somehow bring into being a sentient being, I would think that we would then have an obligation to that being.

As to whether we can create a being that’s greater than ourselves, that’s a relatively easy one: assuming our consciousness arises from purely natural phenomena, the only limitation is technological. We don’t have to understand something, necessarily, to duplicate it. Genetic algorithms, self-modifying code, feedback-optimized neural networks: all can be used to duplicate processes that we can’t explain as a formal algorithm or a set of differential equations.
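
To make that a bit more concrete, here is a minimal genetic-algorithm sketch in Python (the bit-string target and the fitness function are invented purely for illustration): selection plus random mutation homes in on the target without anyone writing down a formal procedure for the answer.

[code]
import random

# Toy illustration: the hidden TARGET and the fitness function are made
# up for this sketch. The point is that selection plus random mutation
# finds the target without an explicit, hand-written solving procedure.
TARGET = [1, 0, 1, 1, 0, 1, 0, 0, 1, 1]

def fitness(genome):
    # Score: how many bits agree with the hidden target.
    return sum(1 for g, t in zip(genome, TARGET) if g == t)

def mutate(genome, rate=0.1):
    # Flip each bit with a small probability (random mutation).
    return [1 - g if random.random() < rate else g for g in genome]

def evolve(pop_size=20, generations=200):
    population = [[random.randint(0, 1) for _ in TARGET] for _ in range(pop_size)]
    for _ in range(generations):
        # Keep the fitter half, refill with mutated copies of survivors.
        population.sort(key=fitness, reverse=True)
        survivors = population[: pop_size // 2]
        population = survivors + [mutate(random.choice(survivors)) for _ in survivors]
    return max(population, key=fitness)

best = evolve()
print(best, fitness(best))
[/code]

Nothing in that loop “understands” the target; the structure just gets selected into existence, which is the sense in which such systems can outgrow what their designers wrote down explicitly.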


No matter where you go, there you are.

Another interesting related question: Could a computer be made arbitrarily complex without developing sentience?


Then again, what do you do about me–I flunked my Turing test.


rocks

If an artificial intelligence were to develop to a stage anywhere near as sentient as I am, and it had the intention of replacing me, then I would have it killed, no questions asked. If it’s going to play by the rules of evolution, then I’m going to play by the rules of evolution, and I sure as hell ain’t going down without a fight. :smiley:

That being said, I can still see at least one way we humans and smart machines can coexist happily ever after: if we become symbiotic. Other than that, one of us will eventually have to go…

By the way, in the scenario mentioned above, the killing of said machine would be categorized as “casualties of war”, not “murder”. :wink:

I’m not saying that it is right, but computers, sentient or not, will not have rights until they can fight for them.

Humans have a pretty good record of oppressing other humans; I don’t see people just handing over equal rights to comps without a fight.

Would we bomb Kosovo over computer crimes?

:slight_smile:

Will it be all right to “kill” a Windows computer, but not an Apple or a Unix machine?

Equal rights for all computers? :slight_smile:

No matter what level of awareness computers achieve, they will never be acknowledged by at least part of the religious communities as having souls. Only an enforceable treaty between computers and humans will give them the “rights.”

So, I see a progression like this…

One computer becomes “aware” (if that is even possible).

Another, and then another.

Maybe those comps hate each other and “fight.” (Why not? Humans do.)

Time will have to progress and comps will have to “learn” to cooperate.

Something will have to happen where they demonstrate that it is in humans’ best interests to recognize comps as “equals.”

Then if they ever become “equals,” expect to see periodic wars between comps, and between people and comps.