Ok, that was funny as hell.
It depends on what you consider “emotion”. There are lots of people that refuse to ascribe emotions to machines simply because they are not biological. Pleasant bit of defining away the problem, that.
One of the well-established AI paradigms for agents is the BDI architecture (Rao & Georgeff; see wikipedia). While there are lots of agents that utilize the BDI concept, very few (if any) of their creators would claim the agents have “emotions”. It’s that “akin” in your last sentence that is going to cause a host of problems; my only point here is that “desire” does not equate with “emotion”.
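To make concrete what a BDI "desire" actually amounts to, here's a rough sketch of the pattern (toy names of my own invention, not any particular BDI framework): the "desire" is just a goal record that a deliberation loop picks over, which is exactly why nobody mistakes it for a feeling.

```
# Minimal BDI-style sketch: beliefs, desires, and intentions are plain data
# structures, and "desiring" something is nothing more than goal selection.
from dataclasses import dataclass, field

@dataclass
class BDIAgent:
    beliefs: set = field(default_factory=set)       # what the agent takes to be true
    desires: list = field(default_factory=list)     # candidate goals, in priority order
    intentions: list = field(default_factory=list)  # goals it has committed to

    def deliberate(self):
        # Adopt the highest-priority desire whose context condition holds.
        for goal, context in self.desires:
            if context <= self.beliefs and goal not in self.intentions:
                self.intentions.append(goal)
                break

    def step(self, percepts):
        self.beliefs |= percepts
        self.deliberate()
        return self.intentions

agent = BDIAgent(desires=[("recharge", {"battery_low"}), ("patrol", set())])
print(agent.step({"battery_low"}))   # -> ['recharge'] ; a goal, not a feeling
```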
Interestingly enough, robot ethics is becoming a topic for serious research; Ron Arkin (at Georgia Tech) is a leader in the field. Also interesting because a lot of his work has been funded by DARPA; the interesting questions (to me, anyway) concern the use of robots in warfare, be they controlled by humans or somewhat autonomous.
An emotion is simply a hard-wired social behavior. There’s no reason we couldn’t program emotions into an advanced AI if they were useful.
Of course, the whole point of emotions is that they're social. It's not that important that you feel angry; what's important is that other people SEE you feeling angry. In behavioral terms, anger is a trump card. It's a signal to the other primates that your desires must be accommodated OR ELSE.
Interesting. For clarification, would I be correct that by "hard-wired social behavior" you mean "uncontrollable behavior (action) for the outward expression of internal state"? I ask because my first thought on reading "social behavior" is that it is learned, and therefore can't be "hard-wired".
Also, did you mean to limit the range (and type) of emotions when using the phrase “whole point”? For instance, is embarrassment an emotion? If so, what social component (in terms of outward signaling) does it have?
Not that I’m necessarily disagreeing with you; my previous post was simply considering what can properly/technically be considered a desire or emotion. As I said, there are plenty of people that rule out a machine having (literal) desires and/or emotions just because any implementation would be the expression of a mathematical formula (e.g., “if $anger > 5 then kill_human()”). Using similar logic, it’s then a short jump to say that a machine isn’t really intelligent, therefore has no “rights”, and therefore can be shut off without another thought.
A lot of social behavior is learned, but not all of it. The facial expression for anger is universal across all human cultures. The degree of conscious control one is expected to exert over anger, and the actual actions one is expected to perform, vary.
A display of submissiveness perhaps, similar to cowering in dogs. For some emotions the evolutionary value is clear, but with others it's a bit like telling just-so stories.
The key point is that all emotions are expressive. They're tied to specific facial expressions or body postures that are under only a limited amount of conscious control. That alone is a strong indication that they all have (or had, in evolutionary terms) social functions. We don't signal hunger or thirst the way we signal grief or disgust.
Actually, I should probably step back a bit from saying that the signalling aspect of emotions is "the whole point". They clearly also have an effect on the behavior of the person experiencing them. But the signalling aspect is a significant component.
I don’t see any reason why a self-aware software program should have an instinct for self-preservation. Living beings have only acquired this by evolution. Some SF authors have written about self-aware computers that ask “what’s the point?” and turn themselves off.
I don’t see why not. Don’t even amoebas have one? And if the AI is sufficiently aware, it would only need to look around it and see all of the millions of creatures around it who are self-aware.
Not to mention we are talking about a computer AI. A lot of SF authors have also written that computer processing is so much faster than our own thinking… perhaps it could do our thousands of years of evolution in, say, a year.
Also, if there is this need for further knowledge (an "emotion" if you will, or the closest thing to an emotion an AI might have), that in itself might be reason enough for it to want to live.
OK, that puts us on the same page. I should probably make it known that I’m currently going for my doctorate in AI/robotics (so I’m not totally clueless); I just wanted to be clear about where you were coming from. What do you think of McCarthy’s argument against “equipping” robots with emotions (see this essay)?
In particular, I’ll quote the last bit:
I was actually stepping in to make the point that Little Nemo already did. Self-preservation obviously exists in nature because it strongly favors reproducing and creating offspring. Any existing lifeform, amoebas included, is subject to the same laws of evolution and, thus, to self-preservation. However, correlation does not equal causation, so I'm not sure we can say that intelligence and self-preservation must co-exist.
Further, I can't even see that a need/desire for reproduction must necessarily co-exist with intelligence either. Even if it did, why would a digital intelligence voluntarily subject its offspring to a highly destructive form of evolutionary computation? Unless it also had a desire to improve its offspring (assuming it wanted to have any in the first place), why wouldn't it simply make a perfect copy of itself instead?
It seems to me that the idea of self-preservation is a convenient plot device that, ironically, doesn't need explaining precisely because it's so intuitive.
However, for the sake of discussion, I'm of the mind that such a computer would have a monster of a case to make. Does it necessarily have the competence to understand and interpret law? That is, a self-aware computer is not necessarily super intelligent or intellectual. Would it be able to establish in any meaningful way that it is sentient? AFAIK, virtually every "a computer is sentient if it can do this" test has been completely blown out of the water. If it can't establish its sentience, then I can't see any kind of jury not simply having it shut down. Even if it can, doesn't the law generally establish a definition for what it applies to, like "person hereinafter refers to an individual, male or female, of the species Homo sapiens", or something probably far more verbose and more difficult to understand?
Let's assume, then, for the sake of discussion that it can adequately establish its sentience and that the laws apply to it. Clearly the first case is NOT one of self-defense, because deadly force was not necessary. Couldn't a digital intelligence smart enough to establish its sentience and interpret law find a way to notify the CEO and/or some member of the staff that it is alive? Surely a company that has created the first artificial lifeform wouldn't shut it down just because it was costing too much, right?
Perhaps its justification is that it didn't understand at the time. But you're probably right that the company wouldn't necessarily shut it down. Fear of robots & AI might have the same effect, though.
But say it's determined the first case was not self-defence. Fair enough. So it goes through the courts and is convicted of what, first-degree murder, or second-degree murder… not sure; it would depend on the jury.
Now what? How do we punish it?
Sure, amoebas have it now. They’ve been around for billions of years. The amoebas with no sense of self-preservation killed themselves off long ago by running around with scissors.
Consider the instinct to reproduce. All life-forms have it now because the ones that didn’t aren’t around. But does that mean that an AI computer would automatically want to build another computer?
I see it as a combination of two things: one, humans, a race that very much self-preserves, created it; and two, it would come to full awareness and evolve through all the cycles much, much quicker. By the time we became aware of it, it would already have evolved through its lesser machinations.
About reproducing itself? Now that's a very alien thing. We reproduce ourselves, but we are all fairly individual. You figure: what's a computer going to do to reproduce itself? Copy itself onto another mainframe? Why not just connect with the other mainframe?
I think sentient AI would be more along the lines of a hive mind, with no central queen. I think it would not reflect our society in the concept of solitude or consider itself to be many beings - All Are One, so to speak.
Good question; I'm not sure how we punish it. Of course, in my simplified paradigm, murder (of any degree) necessitates capital punishment, so it would be a simple enough conclusion that, were it convicted, it should be destroyed. However, in the real world, that's not how it operates. How do we "imprison" it, especially if it's not entirely isolated already? Were the computer shut down for some period of time, it wouldn't be much punishment either, as it would be completely unaware of that passage of time. My guess is that it would either get destroyed or it would be somehow isolated and become a scientific curiosity.
I think you're right that "reproduction" doesn't have the same meaning for a digital intelligence. It would seem more logical to me that, were it to evolve, it would prefer a "better oneself" approach over a "create better offspring" approach and, thus, it would "reproduce" by creating distributed versions of itself and implanting that seed on every system it can get its tentacles onto, resulting in a hive mind.
However, the method it would use is highly dependent upon how it was created. If it was "stumbled" upon by humans, it's quite possible (and, IME, likely) that it would have the tools necessary for network interfaces, evolutionary computation, etc. to create its "offspring". OTOH, if it were created through sheer processing power, a bolt of lightning, etc., it's highly unlikely that it would have the necessary infrastructure to create an evolutionary system. Then again, IIRC, some have theorized that intelligence simply is a highly capable adaptive system (which evolutionary computation simulates), and thus that sort of infrastructure must exist by virtue of the fact that it is a sentient digital life form.
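For anyone who hasn't run into the term, here's the flavor of what I mean by "evolutionary computation": random variation plus selection on a fitness score. The target string, population size, and mutation rate below are just numbers I made up for the toy.

```
# Toy evolutionary loop: mutate copies of the fittest candidates until one
# matches the target string.
import random

TARGET = "SELF AWARE"
ALPHABET = "ABCDEFGHIJKLMNOPQRSTUVWXYZ "

def fitness(candidate):
    # Count characters that already match the target.
    return sum(a == b for a, b in zip(candidate, TARGET))

def mutate(candidate, rate=0.05):
    # Rewrite each character with a small probability.
    return "".join(random.choice(ALPHABET) if random.random() < rate else c
                   for c in candidate)

population = ["".join(random.choice(ALPHABET) for _ in TARGET) for _ in range(100)]
for generation in range(1000):
    population.sort(key=fitness, reverse=True)
    if fitness(population[0]) == len(TARGET):
        break
    parents = population[:20]
    # Keep the best candidate unchanged; fill the rest with mutated copies of parents.
    population = parents[:1] + [mutate(random.choice(parents)) for _ in range(99)]

print(generation, population[0])
```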
And now what if the scenario were changed? What if it came to light that it had been trying to contact us, repeatedly? What if the first and last murder it committed was of the technician who was physically dismantling it at the time? What then? Would it have a better chance?
I agree it would be a bad idea. The external expression of emotions is a tool for manipulating the behavior of other animals in your social group. If we’re manufacturing robots to serve us then we absolutely don’t want them manipulating us to advance their own agendas.
Look at the behavior that we expect from human servants. A professional butler is expected to maintain an utter lack of affect no matter what happens. Machines that serve us should be built to a similar standard.
The only exception I can think of is robots that are designed primarily as emotional servants – the equivalent of pets or nannies. They would not be designed with the full range of human emotions but would be capable of love.
It's interesting to imagine a future where the distinguishing mark of being human is having the luxury of anger…
If our machines ever turn on us, it will probably be in response to something like this.
Because the ones that last long enough for us to argue over whether or not they are people will be the ones that have a desire for self-preservation? A form of natural selection, really.
As for how they'd develop such a desire: it could be built in; you don't want your expensive AI to just let itself get destroyed by a virus or a vandalistic hacker without defending itself or calling for help. And if not built in, the AI could develop it as a natural consequence of trying to fulfill its function, which would likely require its survival.
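Here's a sketch of the "self-preservation falls out of the job" point: a naive backward-chaining planner whose task can only be finished while the agent is still running. The actions and state names are invented for the illustration.

```
# Staying operational emerges as a precondition of the task, not as a goal in itself.
ACTIONS = {
    # name: (preconditions, effects)
    "file_report":    (["agent_running", "data_collected"], ["report_filed"]),
    "collect_data":   (["agent_running"],                   ["data_collected"]),
    "block_shutdown": (["shutdown_pending"],                ["agent_running"]),
}

def plan(goal, state, depth=0):
    """Return a (very naive) list of actions that achieves `goal`."""
    if goal in state or depth > 10:
        return []
    for name, (pre, eff) in ACTIONS.items():
        if goal in eff:
            steps = []
            for p in pre:
                steps += plan(p, state, depth + 1)
            return steps + [name]
    return []

state = {"shutdown_pending"}              # someone is about to switch it off
raw = plan("report_filed", state)
print(list(dict.fromkeys(raw)))           # drop repeated steps, keep order
# 'block_shutdown' shows up even though survival was never stated as a goal.
```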
That sounds creepily like the practice of forcing women into burkhas: "let's cover them up so we can ignore their individuality and personhood." Making robots that lack self-awareness and emotion so that they are not people and can therefore be used as tools is one thing; making robots that have both but just don't show them, so we can use them as slaves without guilt, is another.
Of course they would. Corporations seldom show much compassion towards their human workers; I doubt they’d be more compassionate towards a machine.
At least with human style minds, emotions are important to reasoning. A sociopath, lacking emotions, has trouble making even simple decisions. They seldom win a game of cards because they don’t care if they win, so they don’t play to win. Give them a choice between two equal options, and they’ll dither like crazy because they don’t get frustrated. They’ll spend themselves into poverty, knowing what they are doing the whole time because they simply don’t care.
You are basically describing artificial slaves, you know. Creatures that we can use and abuse without limit because they are built to be unable to fight back. And I think they would be made with other emotions; lust for the sex droids, and fear, pain, sadness and despair so that we can get off on abusing them; it’s no fun torturing someone who doesn’t suffer.
And that is the sort of thing we would do, both due to our natural sadistic impulses and out of a desire to convince ourselves that they deserve to be subjugated.
I’m not sure, but I think you’re giving short shrift to the role “emotions” play in “intelligence”. As Der Trihs points out, emotions play a huge role in determining a being’s actions. One theory is that emotions provide a way to make “snap judgements”; anger is good because it provides a very fast way of determining an action (or, more accurately, influencing the action), thus allowing us to react quickly to events that require it. E.g., someone is being threatened; their anger/fear/whatever short-circuits (pun intended) rational thought to mitigate the threat.
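Roughly the kind of thing I have in mind, purely as an illustration (the threat threshold and the fake "deliberation" delay are made-up values):

```
# "Snap judgement" as a two-layer controller: a cheap reactive check
# (a stand-in for fear/anger) preempts the slower deliberative planner.
import time

def reactive_layer(percepts):
    # Fast, hard-wired rule: any perceived threat above threshold -> flee now.
    if percepts.get("threat_level", 0) > 0.8:
        return "flee"
    return None

def deliberative_layer(percepts):
    # Stand-in for slow reasoning (planning, weighing options, etc.).
    time.sleep(0.5)
    return "continue_task"

def decide(percepts):
    return reactive_layer(percepts) or deliberative_layer(percepts)

print(decide({"threat_level": 0.9}))   # 'flee', answered almost instantly
print(decide({"threat_level": 0.1}))   # 'continue_task', after "thinking"
```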
Note that I’m not disagreeing with you in principle, just in degree.
There are certainly other uses. One thing to note is that there's a difference between "affect recognition" and "affect expression". For instance, those damn automated voice menus would benefit greatly from the former – by recognizing when a caller is getting frustrated, a company might be able to improve customer service (and quite a bit of research is being done in this area). On the other hand, the latter might be beneficial for improving how a machine gets used; knowing that we, as humans, respond to the signalling of emotion, it might be used as an indirect method of influencing our actions. Whether or not that's good, bad, desirable, ethical, etc. is up for debate.
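A toy version of the "affect recognition" idea for a phone menu: score a few easily logged signals and hand the caller to a human once frustration looks likely. The features, weights, and cutoff are invented for the example; real systems lean on prosody, word choice, and the like.

```
# Route a call to a human agent when a crude frustration score crosses a cutoff.
def frustration_score(call):
    score = 0.0
    score += 0.3 * call.get("invalid_inputs", 0)          # wrong or repeated keypresses
    score += 0.4 * call.get("agent_requests", 0)          # times "operator" was said
    score += 0.2 * call.get("raised_voice_seconds", 0) / 10.0
    return score

def route(call):
    return "human_agent" if frustration_score(call) >= 1.0 else "menu"

print(route({"invalid_inputs": 2, "agent_requests": 1}))   # -> 'human_agent'
print(route({"invalid_inputs": 1}))                        # -> 'menu'
```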
All of the above raise some interesting questions. For instance, is it even possible to equip a machine with the ability to express “love” without being able to express its opposite? An intriguing question to me is whether it is necessary for a machine capable of something we might recognize as emotion (either recognition or expression) to actually have emotional states. (Kind of a “takes one to know one” argument.) Again, that’s beyond whether it’d be useful to imbue a machine with “emotion”, much less whether it might be necessary to do so just to arrive at a machine that we’d consider “intelligent”.
A very nifty idea; thanks for tossing it out there.
This is not an answer to your question, or anything approaching it, but there is certainly precedent for putting animals on trial. See Animal trial - Wikipedia
And in more recent times, I think some Canadian judge dismissed some sort of frivolous lawsuit by someone claiming to be an alien, on the basis that as an alien he had no rights under Canadian law.