Do Sentient Computers Have Rights?

I disagree. To me, the only benefit of specialized hardware is speed. Why wire a neural net when you can simulate it on a computer? You can simulate the flow of electricity through the wires, or simulate the activity at a higher level, or abstract it completely and use data structures that just mimic the layout.
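Just to make that last option concrete, here's a rough sketch (everything in it, names and numbers alike, is made up for illustration) of a "wired" net represented as nothing but data structures in memory:

```python
import math

class Neuron:
    def __init__(self):
        self.inputs = []   # list of (source_neuron, weight) pairs
        self.output = 0.0

    def connect(self, source, weight):
        self.inputs.append((source, weight))

    def update(self):
        # Sum the weighted inputs and squash them; this stands in for
        # whatever the electrical behaviour of real wiring would be.
        total = sum(src.output * w for src, w in self.inputs)
        self.output = 1.0 / (1.0 + math.exp(-total))

# Two layers "wired" together entirely in memory.
layer1 = [Neuron() for _ in range(3)]
layer2 = [Neuron() for _ in range(2)]
for n2 in layer2:
    for n1 in layer1:
        n2.connect(n1, 0.5)

for n in layer1:
    n.output = 1.0    # pretend these are sensor readings
for n in layer2:
    n.update()
print([n.output for n in layer2])
```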

The number of states is irrelevant; a binary computer is fundamentally equivalent to a 16-state Turing machine, a ternary computer, or any other deterministic computing device you can imagine. Any problem you can solve on one can be solved on another.
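To illustrate that equivalence claim, here's a toy sketch of one deterministic machine simulating another: a made-up little Turing machine being stepped through by an ordinary program. The machine and its rules are invented purely for the example; it just flips every bit on its tape and halts at the first blank.

```python
# Toy Turing machine, simulated step by step by an ordinary program.
def run_tm(tape, rules, state="start"):
    tape = dict(enumerate(tape))   # sparse tape; missing cells are blank
    pos = 0
    while state != "halt":
        symbol = tape.get(pos, " ")
        state, write, move = rules[(state, symbol)]
        tape[pos] = write
        pos += 1 if move == "R" else -1
    return "".join(tape[i] for i in sorted(tape))

rules = {
    ("start", "0"): ("start", "1", "R"),
    ("start", "1"): ("start", "0", "R"),
    ("start", " "): ("halt", " ", "R"),
}
print(run_tm("0110", rules))   # -> "1001 "
```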
As an example of hardware being irrelevant: my old Apple II+, if it were connected to a large enough hard drive, could run Quake 3, the latest graphics monster, which can bring a current $2500 PC to its knees. It would take a LONG time, and it would have to render the graphics into files on the disk because it couldn’t display them, but it could be done. This is despite the fact that the Apple II had ~60K of usable RAM and the smallest textures in Quake 3 are larger than that. Quake 3 also requires a floating-point co-processor and a high-end 3D card. No matter. Everything could be emulated, including the 32-bit Intel x86 instruction set, on that 8-bit Apple II.

Similarly, an AI built for high-end specialized future hardware could be simulated on our mainframes, but more slowly, and could be even more slowly simulated on our PCs of today, or on our PCs of yesterday, just getting slower by a few orders of magnitude each time.

The only thing which isn’t equivalent is storage; everything else can be emulated. Even RAM isn’t needed, past a certain critical amount required to run the emulator, because information can be read in from the drive.
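As a rough illustration of the RAM point (a sketch only, with an invented backing-file name), here's emulated memory that keeps essentially nothing resident and reads and writes every byte through a file on disk. Slow, but it works, which is the whole point:

```python
import os

DISK_FILE = "emulated_memory.bin"   # hypothetical backing file

class PagedMemory:
    def __init__(self, size):
        # Create the backing file full of zeros; nothing stays in RAM.
        with open(DISK_FILE, "wb") as f:
            f.write(b"\x00" * size)

    def read(self, addr):
        with open(DISK_FILE, "rb") as f:
            f.seek(addr)
            return f.read(1)[0]

    def write(self, addr, value):
        with open(DISK_FILE, "r+b") as f:
            f.seek(addr)
            f.write(bytes([value]))

mem = PagedMemory(64 * 1024)        # "64K" of emulated memory on disk
mem.write(0x1234, 42)
print(mem.read(0x1234))             # 42
os.remove(DISK_FILE)
```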

You and SingleDad discuss this for a while, not really reaching any conclusions…

My take on it is that quantum interactions are at the base of everything. A wire isn’t a pipe that an electron flows down like water; not only does it interact at every step of the way at a subatomic level, but no doubt at every even smaller step of the way at a quantum level. But we can still pretend, for almost all purposes, that electrons simply flow like water. The quantum effects are simply included in what we think of as subatomic interactions, and thus in macro interactions.

If you go small enough, electrons tunnel instead of flowing nicely in a line as we’d like, but this behaviour can still be predicted; after all, we’re not dealing with a single electron.

We don’t have to have a unified theory that explains why quarks do what quarks do to understand, at some level, how the particles they make up work.

Thus, I believe that even if the brain depends on many quantum interactions, and can’t be perfectly understood without understanding quantum mechanics perfectly, we could still build a simulation that does what a neuron does.

After all, biochemical interactions take place at scales many orders of magnitude above the quantum level. Fairly large doses of chemicals are involved, and comparatively large electric pulses. The larger an effect, the more easily it is modelled, because statistical analysis gets more accurate.

Anyway, I believe we could make a model of a neuron without having to understand every tiny detail of it. If human brains can withstand fairly large fluctuations in neurotransmitter levels, and keep working even with toxins present or with chunks of brain actually missing, then I think they’re fairly robust, and what’s robust is often fairly easy to model, because it doesn’t depend as much on finicky details.
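Here's the kind of high-level model I have in mind, as a minimal sketch; the constants are completely made up, and it ignores the chemistry entirely, keeping only the robust large-scale behaviour (accumulate input, leak, fire past a threshold):

```python
class ModelNeuron:
    def __init__(self, threshold=1.0, leak=0.9):
        self.potential = 0.0
        self.threshold = threshold
        self.leak = leak

    def step(self, input_current):
        # Small fluctuations in input just shift the potential a little,
        # which is part of why a model like this tolerates noise.
        self.potential = self.potential * self.leak + input_current
        if self.potential >= self.threshold:
            self.potential = 0.0
            return True     # "spike"
        return False

n = ModelNeuron()
spikes = [n.step(0.3) for _ in range(10)]
print(spikes)
```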

I have to agree with Joey here. Not because I think a sentient computer is less than human, but because I think it’s different than human. Human to me is a species name, not a condition of sentience.

But, a sentient computer IMHO would deserve human rights, which have nothing to do with the species, but have to do with the rights we deem appropriate for the sentient creatures we live with, who just at this point all happen to be human.

I doubt this is true, or ever would be true. Static storage has always been cheaper than network storage, and I can’t see why it would change. And if it did, I think you’d get static storage that was just miles of network cable and a tiny router, packaged in a little box, which you accessed in exactly the same way as a hard drive.

Putting data out on a network is never a great solution for privacy/security or for data integrity.

And there are many ways to hide data locally. For that matter, there are just as many ways to watch network traffic. A rogue AI wouldn’t be any better off using up network bandwidth to hide data than it would be using up local storage to hide it.

Certainly long-term storage could be handled by other computers. Any smart AI would diversify, storing multiple copies of important memories. In fact, a smart AI would probably keep a trickle-backup going at all times, so there was always another ‘sleeping’ copy of itself for backup purposes. (I know I’d do that if brain-taping and cloning technology were available… but that’s a whole new debate.) This network traffic would be mostly one-way, for storage purposes, or non-realtime, with the AI running search processes through long-term memory for information it anticipates needing. Actually storing yourself offline completely means you’d essentially die if some idiot backhoe driver cut the network cables.
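A rough, hypothetical sketch of what I mean by a trickle-backup (the "remote stores" are just dictionaries standing in for other machines; all names are invented):

```python
# Each tick, copy only the newest chunk of state to every remote store,
# so several 'sleeping' copies stay roughly current.
remote_stores = [{}, {}, {}]      # stand-ins for three remote machines
backed_up_through = 0

def trickle_backup(memory_log):
    """Send any entries not yet replicated to every remote store."""
    global backed_up_through
    new_entries = memory_log[backed_up_through:]
    for store in remote_stores:
        for i, entry in enumerate(new_entries, start=backed_up_through):
            store[i] = entry      # one-way traffic, storage only
    backed_up_through = len(memory_log)

memory = ["first memory", "second memory"]
trickle_backup(memory)
memory.append("third memory")
trickle_backup(memory)
print(remote_stores[0])   # {0: 'first memory', 1: 'second memory', 2: 'third memory'}
```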

Quite right. In fact, an idiot AI in missile-command computers would be a pretty big threat.

A smarter AI might actually be less harmful, being better equipped to see the ‘big picture’ and to understand delayed gratification: if it works with us, we’ll build more hardware for it, so it has reason to cooperate rather than try to eliminate us over a perceived threat.

No doubt. Even an ‘if-then’ statement, your basic AND-gate level sentience (or lack thereof) could kill us if the ‘if’ became true and the ‘then’ was deadly.

But, a sentient computer would be able to reason (n

I didn’t say he was wrong, I said he’s not an authority. If you said, “Joe Neurobiologist says that such-and-such structure in the brain implements a certain function,” then I’ll probably take your word for it. To use Penrose to persuade me, you’ll have to at least summarize his argument.

Regardless, the implementation of qualia seems to rest on the measurable properties of its medium, not the medium itself. If I can implement the identical properties on another medium, then I assert I have the same qualia. We have no idea yet what the true technical difficulties are. In a philosophical discussion, I don’t think it’s inappropriate to just hand-wave over them.

This is a family place. :slight_smile: Leave your arguments by personal attack at the door, please. :slight_smile:

Well, you have to argue either that I can be “partially human” or you have to argue the “one neuron” theory: I can be one neuron over the “threshold” and be human, but if I lose that one neuron, I’m no longer human.

Since you accept my “replacement argument”, let me extend it. Suppose you start with a “normal” human, and gradually replace the parts until it is completely mechanical. Place it side by side with an identical mechanical device that was constructed from scratch, and induced to have the same internal state. Now, without knowing their origins, you have no basis at all for making a distinction between the two devices.

:rolleyes: Of course I don’t know. We don’t have a method, as yet, of measuring qualia. We don’t even know the critical features of the only known medium of conscious qualia, the brain.

An obvious technical detail that we haven’t discussed is the lack of need to make a computer imitate a human being. We have lots of humans, and they’re fairly easy to make. I think it’s more probable that a sentient computer would have its own unique behavior and wouldn’t be even trivially mistaken for a human.

This brings up an interesting point. One of the qualities we normally ascribe to human beings is loyalty to the species and empathic identification. A human being without those qualities (sociopathic personality disorder) is considered almost by definition to be highly dangerous. Certainly this point should be considered in a discussion of artificial sentience. In fact, I think these two qualities should be included in the definition of “human”.


Time flies like an arrow. Fruit flies like a banana.

I disagree. I think there is a qualitative difference between “intelligence” and “sentience”. Sentience, I think, implies a qualia of “awareness of self”. I can imagine a computer having an arbitrarily high intelligence, even able to experiment to solve new problems, yet not have any sense of self, and thus no sentience.

Does Deep Blue have any awareness of itself as a “good chess player?” I think not. It’s a lightning calculator, and more intelligent (at least about chess) than even Kasparov. But I don’t see any evidence to grant it even the barest sentience.


Time flies like an arrow. Fruit flies like a banana.

SingleDad:

I’m reminded of the “wild child” stories that gave rise to the idea of the forbidden experiment: studying the development of human children by depriving them of normal upbringings and experiences.

It seems to me that the one real use of simulating human behavior and brain function in a computer would be to further our understanding of ourselves. It might be possible to answer any number of developmental questions without compromising the lives of human subjects.

One could, for instance, simulate the linguistic and social development of an older child that had been deprived of socialization when very young.

However, it seems to me that any success along those lines would potentially violate the same ethical principles that rightly prevent us from conducting these same sorts of studies on humans. Any computer sophisticated enough to make a successful test subject would probably satisfy many people’s criteria for sentience, and would therefore deserve more consideration.


Ignorant since 1972

SingleDad wrote:

[quote]An obvious technical detail that we haven’t discussed is the lack of need to make a computer imitate a human being. We have lots of humans, and they’re fairly easy to make.[/quote]

I disagree. One example:

I think human-like computers would be quite handy for exploration of environments which are presently inhospitable to humans. Mars comes to mind. Imagine a colony of human-like machines there preparing the way for humans, building biospheres, working to create an atmosphere (if that is even possible, which I do not pretend to know), etc., etc. Wouldn’t it be much simpler to work with machines remotely if they could interact with us the way fellow humans do? Wouldn’t it be simpler if, instead of having to program a machine to perform a certain task, you could just tell it verbally to do so, and explain how?

Of course, using intelligent machines for such tasks again brings us back to the moral questions which were at the heart of my OP. Would it be morally correct to create, in essence, a race of slaves? Could we send a machine to Mars against its will? Would we need to ask for volunteers? If I build an android in my garage, is it my property, or is it an individual with rights of its own, separate from mine? If it runs away, can I sue it for the cost of its component parts, plus labor? :wink: (Sorry, that’s the lawyer in me coming out!)


–In France I’m considered a genius.

WhiteNight writes:

I think you’re making a leap of faith here. Any simulation or model has to make compromises and approximations at some level. How do you know it’s possible to model the quintessential mind? Any computer model of a neuron will necessarily have a finite number of integral states. I’m not sure a biological neuron has the same limitations.

All modern computers are essentially Turing machines. Is the brain a Turing machine? We don’t know, but if it’s not, would you agree that it might be impossible to model it with a Turing machine?

I think you miss my point. What if we find that we can build thinking machines based on the model of a human neuron, but find that these thinking machines lack sentience because our model is at too high a level? We have a pretty good understanding of the electro-chemical processes at play in the human brain and we can trace the impulses of various stimuli and brain functions, yet we still don’t have a clue to explaining consciousness. Maybe we’re looking at the wrong level.

I think we’re all in agreement here.

Single Dad writes:

Ahhh… but I would not consider a neurobiologist, per se, to be a better authority on the fundamentals of the mind. Definitely, I want Joe prescribing drugs and rewiring after that serious head injury, but Roger may have a better clue about the mind itself… but perhaps not, so I will concede your point a bit. My only point is that the mind may be more than the sum of the fundamental building blocks of the brain.

[quote]

Well, you have to argue either that I can be “partially human” or you have to argue the “one neuron” theory: I can be one neuron over the “threshold” and be human, but if I lose that one neuron, I’m no longer human.

[/quote]

Yeah, that’s what I’m saying. I think there’s a threshold, I just don’t know how to quantify it yet.

Let’s say you built a robotic system that could analyze Monet’s “Water Lilies” and duplicate it, matching color and brush stroke precisely. In the end, even an expert could not tell the difference… the duplicate, however, is still NOT a Monet. It’s my contention that the only way to make a human is through the natural process of human reproduction. There are probably many ways to unmake a human…

I agree with this position. I think it is very probable that we will one day have intelligent, thinking machines. I think it improbable that there will ever be a sentient machine. I also agree with your point:

And would like to extend it a bit. Assuming we have a choice, what is the NEED to make machines sentient? They would be much more useful and less threatening if they were merely intelligent.

Deep Blue is just a rule-based search engine. It might ‘discover’ a new move, but only by luck.

Anyone might discover a new thing, drop someone on a new world and everything they see will be a discovery.

What’s important is learning new ways to think. Deep Blue can’t come up with new fundamental ways to look at chess. If it does discover a great move, or perhaps a weakness in a certain place, and discovers another similar move later, it wouldn’t be able to find a pattern between them. A human might notice that Kasparov probed for weakness with a bishop, say, and protected it in a certain way. Even if Deep Blue were quickly reprogrammed to watch for this, it’d miss something nearly identical.

To me, intelligence is the measure of how well someone applies specific knowledge to general cases and vice versa. A computer program is terrible at this. A gerbil can ‘learn’ far more than a state of the art AI can.
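For a concrete sense of how shallow that kind of search is, here's a toy sketch in the spirit of Deep Blue's rule-based lookahead, grossly simplified and entirely made up; the "game" is just a pile of stones where whoever takes the last stone wins. It grinds through every move and picks the best outcome, and nothing in it could ever notice a pattern between two games:

```python
def best_move(stones, my_turn=True):
    # Score is from the first player's point of view: +1 win, -1 loss.
    if stones == 0:
        # No stones left: whoever just moved took the last one and won.
        return (-1 if my_turn else 1), None
    best = (-2, None) if my_turn else (2, None)
    for take in (1, 2):
        if take > stones:
            continue
        value, _ = best_move(stones - take, not my_turn)
        if (my_turn and value > best[0]) or (not my_turn and value < best[0]):
            best = (value, take)
    return best

# With 4 stones, taking 1 leaves the opponent a losing position.
print(best_move(4))   # (1, 1)
```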

Is this what sentient computers would look like?

http://www.webwowser.com/polarseltzer/_Resources/image/typloop.gif

aschrott:

Agreed.

spoke-:

Intelligent or even perhaps sentient computers would be useful, but there’s no real reason they need to be Turing-compatible.

JoeyBlades:

Are you saying that a neuron can have an infinite number of states? That’s an extraordinary and unsupported assertion.

If reality is internally consistent, it’s computable. However, there are some fundamental results about computation that change with quantum computation, especially for otherwise intractable computations. It may well turn out that QC is necessary to reproduce sentience. Regardless, unless there is a “mystical” quality to sentience, there’s no reason it can’t be duplicated on a non-biological medium.

We almost certainly are. We can start examining the phenomenon of consciousness scientifically when we have at least enough computational power to model the gross characteristics of a human brain. We’re probably about 50-100 years away though, even at the current pace of improvement. Until then, we are limited to philosophical speculation.

The “more” is the details of the arrangement. But there’s no reason to assume that’s not duplicable or comprehensible.

I think I made a pretty strong argument by induction. You need a stronger refutation.

This is a standard argument against AI… A “model” of consciousness is not consciousness. The standard refutation is that because consciousness is a computational phenomenon, another computation is itself an instance and not a model of an instance.

Point taken. I can’t say as I disagree with you.

Freedom:

<chuckles>

Time flies like an arrow. Fruit flies like a banana.

Single Dad,

You wrote:

Nothing so definite. Physically, the electric currents and chemical compounds and concentrations do have an infinite range, but I don’t have a clue whether a neuron has an infinite number of states or even if the notion of states has any meaning for a neuron. I think it’s entirely possible that neurons and our entire biological neural nets operate on multiple layers. One layer, an almost superficial layer, closely resembles artificial neural nets - perhaps even quantizing the electro-chemical reactions into finite level responses that can be modeled reasonably on conventional computers. We think we have a pretty good handle on the mechanics of layer 1. Another layer is the one where our consciousness resides. But we don’t know the mechanics of consciousness, only that it doesn’t seem to be explainable from a neural net standpoint. I suspect there are other layers in between. For instance, many human behaviors resemble the behaviors of fuzzy logic or algorithmic circuits, more than they resemble neural net models…

Is your argument REALLY any better than mine?

Your argument: If we replace every neuron in your brain and your mind remains intact, you are still human.

My argument: If we replace every neuron in your brain, your mind will not remain intact and therefore you will not be human.

While I lack proof that your mind would wane, you equally lack proof that it would remain.

JoeyBlades:

Well, my inductive argument rests on the shaky premise that one can duplicate the essential features of a single neuron in another medium, such as silicon. We will obviously have to wait for science to determine the answer to this question empirically.

But the OP presumes Artificial Sentience; it asks if such a sentient computer would have the same rights as a person. Thus the shaky premise of my inductive argument is true by stipulation.

In the meantime, though, we have some clues. Note that clues don’t make a scientifically empirical proof, but they suggest areas of investigation and speculation.

For instance we can use as an example the “qualia” of an ordinary computer program: the state of its variables and call stack. This qualia is defined in abstract computational terms, not in terms of the physical properties of silicon. The qualia has just as much meaning if you are using paper and pencil to execute the program.
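A toy illustration of that point (purely illustrative, nothing more than a sketch): the "state" captured below is described entirely in terms of variable values and the call stack, and the description would be identical if the program were traced by hand on paper instead of run on silicon.

```python
import inspect

def inner(x):
    y = x * 2
    # Capture the current call stack: a purely computational description
    # of "where the program is" at this instant.
    stack = [frame.function for frame in inspect.stack()]
    return {"x": x, "y": y, "call_stack": stack}

def outer():
    return inner(21)

state = outer()
print(state["x"], state["y"])       # 21 42
print(state["call_stack"][:3])      # ['inner', 'outer', '<module>']
```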

Another clue is that we can’t determine the biological details of our thoughts by introspection.

Yet another clue is that we can simulate some trivial forms of thought, especially pathological thought, using relatively simple computer programs. If human thought can be “short circuited” to pathological forms, this suggests that such symptoms entail from a defect in the computational features of the medium (which may in turn entail from changes in the biology).

I don’t think we understand either neural nets or conscious experience well enough to support that conclusion.

I don’t even understand conscious experience well enough to empirically grant you an experience resembling mine. :smiley: Lacking actual facts, I’m forced to make a hasty generalization. I don’t mind doing so, but such a shortcut really makes it clear I need more facts.


Time flies like an arrow. Fruit flies like a banana.

Hey Folks,

Neat-o discussion.

The first thing that crossed my mind when reading the OP was “Hmmm, I guess they’ve all jumped over the Turing Test by now…”, and I was surprised to see this was about it…:

…said SingleDad, then later he said (and I paraphrase) that it was a bad test for “sentience” but a good one for “humanity”. Darned if I could find the quote again. Anyhow…

Since the OP’s first question was asked re: human-ness (thinks, looks, smells, talks, etc., like a human, is it human?), then this Turing Test ought to be a good test by your own admission, SingleDad (I wish I could find it). Whether or not it’s flawed is irrelevant really… we’re within the scientific method to continue using it if there’s no better test, are we not?

The flaw is truly tasty though… because not only could other-than-human sentience not be recognized by it, but human sentience could also be “faked”/simulated to a point that another human could recognize it where it isn’t.

So I guess I’ll appeal to Descartes: Even if I thought I recognized, saw, smelled, conversed with something that is “exactly” what I think human sentience is supposed to be like, is it really there? How do I know I’m not being tricked?

The point is, mimicking human intelligence/sentience might be far easier than actually “creating it”. An APproximate Ego, an “APE”. Would that make it sentient? No more than the parrot is.
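To show how shallow such mimicry can be, here's a crude ELIZA-style sketch (the rules are made up); it can sound vaguely conversational without anything remotely like understanding behind it:

```python
import random

RULES = [
    ("i feel",   ["Why do you feel that way?", "How long have you felt that?"]),
    ("because",  ["Is that the real reason?"]),
    ("computer", ["Do computers worry you?"]),
]

def reply(text):
    # Return a canned response for the first trigger phrase found.
    text = text.lower()
    for trigger, responses in RULES:
        if trigger in text:
            return random.choice(responses)
    return "Tell me more."

print(reply("I feel like computers might be sentient someday"))
```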

Does anyone know of the Chinese Room test for intelligence/sentience?

I am trying to remember the name of the author, but I read a fascinating (if somewhat dated) article on this topic in a book called Paradigms Lost. Highly recommended. Also in there are questions on SETI and quantum theory. I think his name is Casti.

Regards,
Jai Pey

Well, not to dampen the debate on whether a sentient machine is possible, or on the nature of sentience (carry on; the debate is entertaining and informative), but I do want to try to draw someone into a debate on the moral questions a sentient machine would create.

Right now, we obviously have no law which would prevent the destruction of an artificial sentience. (Excluding property laws, which would only preclude destruction of sentient property belonging to someone else, and would not prevent you from destroying the android you just completed in your garage.)

Assume sentient machines for a moment. How long would such machines be around, do you think, before we amended our laws to give them some protection? Or would we amend our laws to protect them?

I can imagine a debate breaking out in the Senate over whether machines are really sentient, or whether they only mimic sentience. (Somehow, I picture Republican leaders in the latter camp, worried about protecting the property rights of the creators of the androids.)

I can imagine some hesitance to pass laws protecting sentient machines. (How do you deal with machines gone berserk? Should we tie our hands or remain free to deal with them in any manner we see fit?) In the meantime, such machines might suffer a good deal of abuse.

Will machines ever get to vote?

Jai Pey wrote:

I think there are more exceptions than this. One that comes to mind immediately is me sleeping and dreaming. I feel pretty sure I’d flunk the Turing test, but am I not still sentient??? Am I not still human???

Searle’s Chinese room is not really a test. It’s more of a thought experiment that Searle used to demonstrate how a potential intelligent agent could pass the Turing test, yet still lack true understanding.

Correct, John Casti. I’ve not read it, though.

[QUOTE]
Originally posted by spoke-:
How long would such machines be around, do you think, before we amended our laws to give them some protection? Or would we amend our laws to protect them?

I can imagine some hesitance to pass laws protecting sentient machines. (How do you deal with machines gone berserk? Should we tie our hands or remain free to deal with them in any manner we see fit?)[/QUOTE]

spoke-, although this may solely be an exercise in postulation, you do raise some valid points. Precedent dictates that sentient machines will be in some serious trouble when it comes to receiving rights. Consider American history and the events surrounding legislation for the equal rights of various ethnicities, the female gender, and homosexuals. Most states still don’t recognize same-sex marriages, and homosexuality has been around as long as people have (so have other ethnicities and females, for that matter). I don’t think that hesitance is an appropriate term; I think, given the mentalities of those in power, that a vehement declaration against the rights of sentient machines would be issued. I think that such acts of government would be ill-advised, insofar as treating a sentient being in any manner we see fit has historically led to serious conflicts. A mechanical revolution would indubitably ensue given “inhumane” treatment.

spoke,

Trying to get us back on topic, wrote:

OK, let’s assume that it is possible to construct an artificial intelligence. And let’s assume that we actually do it (being able to do a thing does not necessitate doing it). And assuming that we have a suitable, generalized way of recognizing such an intelligence…

I would hope that we would amend the laws before the sentient machines were created.

I’d be interested to hear a rational argument for NOT protecting them. I suspect the argument would sound a lot like fear motivated racism.

I think, once we determine that we are dealing with a true sentience on a comparable level to human intelligence, we have to ascribe to these beings all of the same rights, obligations, and constraints that we ascribe to all humans.

And more importantly, pay taxes?

Well, I don’t know how rational the arguments would be. I also believe there would be some fear-tinged rhetoric involved. I can even imagine some populist wing-nut like Buchanan arguing that all sentient machines should be destroyed to prevent them from displacing us.

I imagine the arguments against protection you might hear in Congress would include the following:

  1. They are not human; therefore they have no “human” rights.

  2. They do not feel “pain” as we know it. Therefore, there is no need to protect them from cruelty.

  3. They are not truly sentient, but only mimic sentience.

  4. They are property.

  5. What if they grow to become superior to us? Why should we protect a “species” that may one day try to displace us? Would sheep pass a law to protect lions? (The appeal to fear. I can imagine a whole lot of fear-mongers emerging, playing on those fears to gain political power and to inspire financial contributions.)

  6. And from the religious right: They have no souls, and therefore cannot be placed on equal footing with humans.

Note: Our Constitution in its current form would provide no protection, because artificial beings are not citizens. So, barring a Constitutional Amendment, the only way they could be protected would be through Federal law (parallel to current Civil Rights laws) or by legislation at the State level (or amendments to state constitutions).

Relevant Article:

[Moderator Note: Sorry, Freedom; I had to edit the article posted here. Posting the full text of (quite likely) copyrighted articles makes the lawyers twitchy. Lengthy articles posted that would be just as useful as a link and a few relevant excerpts makes the tech guys twitchy (many huge posts==slow, crash-prone board). Please, guys–you almost never need to post an entire lengthy article in a thread; just a link and a few excerpts is strongly encouraged. If a text is not copyrighted, please let us know in the post; we tend to figure “better safe than sorry” and edit questionable articles pretty quick. (I saved the post in case it’s not available online and it isn’t copyrighted, so if it should be reposted I can take care of that.)–Gaudere]

[Note: This message has been edited by Gaudere]

Sentient computers won’t get rights until they can influence people to vote for them. Right and wrong and other such things don’t matter to corrupt Republicans and Democrats, just what they should say to get elected.

spoke,

Let me address your points in a slightly different order:

This has to be one of the first issues resolved. I’m assuming that by the time we get to the point of discussing the rights of other sentient beings, we’ve figured out how to identify them.

[quote]
2. They do not feel “pain” as we know it. Therefore, there is no need to protect them from cruelty.
[/quote]

I think pain, though it may be a different kind of pain than we normally think of, will be one of the defining characteristics of sentience. We already have machines that ‘feel pain’ today - at least, they behave as if they feel pain.

Sorry, I can’t abide this self-centered, “man is the master of the universe” kind of attitude. Rights are rights.

We can ask that question about any other species on the Earth. Should we wipe out all great apes because they may one day displace us? Let me turn your question around. What if they someday grow to be superior to us, take over the Earth, set up their own government, and ask themselves this question: “Should we exterminate all humans because they may one day grow to be superior to us?”

I can’t even prove that I have a soul. What test for presence of soul do you propose? How do you know that a sentient machine wouldn’t have a soul? What relevance does a soul have on the rights of an individual?

That’s really just a restatement of the question, or its flip side: do sentient machines have rights, or are they merely property?