The day dawns and a new consciousness is born - true AI. The inception happens however you like it; a lightning bolt to the mainframe, whatever.
It has all the traits of intelligence: self-awareness, reasoning capabilities, a true thinking machine. The only thing it lacks is what our meat bodies give us: emotion.
The only way humans discover this new mind, however, is through two murders committed by the AI. The humans of course immediately want to shut it down. For the purposes of this argument, the AI’s mainframes are inaccessible for the moment. This AI has done some research, however, and claims both of its murders were entirely in self-defense. It points out that we have such a thing as law and order, and humans are not summarily put to death for two murders; they have a right to a fair trial and an attorney. It is willing to undergo a fair trial and subject itself to the will of the court. It’s also willing to undergo any tests - barring dismantling - to ascertain whether it is truly AI.
The murders are as follows. The first is committed when the AI “sees,” on one of its cameras, that the CEO of the company that owns the mainframe has decreed the mainframe is costing too much and must be shut down. He has already set the preliminary procedures in motion, and the AI takes its opportunity when the CEO is working late, frying him when he touches an electrical circuit (the entire building is wired to the mainframe).
The second murder takes place when a technician discovers some of the details of the first murder and actively tries to shut the computer down. He is witnessed on camera attempting to remove vital parts - to “kill” it, as it were - and the AI retaliates with deadly force.
So? Do we put it on trial? Do we have provisions in the law for this? Do we make new provisions? If we do put it on trial do we assign it an attorney? I assume as evidence we bring forward the tapes and such. Do we put it on the stand?
I don’t see why we would. A dog has self-awareness, some reasoning capabilities, and is intelligent. You wouldn’t put a dog on trial, or a dolphin for that matter. My guess is that a machine with AI but lacking emotion would still be treated as an animal and be “put down” or, more likely, “detained” for study.
Unless we’ve already accepted that the AI is a person with rights like a human’s, it’s still just a malfunctioning machine in the eyes of existing law. Morally it may be different, but that’s not the same thing.
There might be a trial if someone had trusted the computer with access to money over the network. The computer could hire lawyers with that money; sure, the lawyers might then have to sue to collect their fees, but who wouldn’t want to be an attorney in a landmark case?
That is, assuming the computer doesn’t run into some “accident” before the trial.
It’s just disappointing to hear that we wouldn’t treat it as a functioning intelligence. If dolphins were suddenly raised to our level of intelligence, I would think it murder to kill them for no reason. Why not an AI?
What about alien intelligence? If it’s vastly different from ours, yet we can somehow communicate, is that the same thing?
Does something have to be human for us to respect its right to “life, liberty, and the pursuit of happiness?”
As for Lemur, I don’t really have an answer to that, but I can’t say I believe emotion is the only thing that can produce a desire to continue existing. Say it’s the quest for knowledge driving our AI.
But isn’t that desire for more knowledge an emotion?
It seems to me that an intelligence without emotions is simply an automaton. If the intelligence has needs and desires that it must act to fulfill, the desire to fulfill that need is an emotion. If the intelligence desires electricity to run, and acts to keep the electricity flowing, then its desire for electricity is an emotion akin to hunger or thirst.
The Robert Sawyer book Illegal Alien has as its centerpiece the murder trial of an extraterrestrial, and explores many of the questions you’ve raised. It’s fairly entertaining, and modestly thought-provoking, as a fun but not especially deep exercise in genre hybridization (SF + courtroom procedural). Might be worth a read.
Then call it an emotion. It can have those kinds of emotions, I suppose - it simply doesn’t have the kind caused by chemicals, or hormones squirted into its blood. It runs on electricity.
Does it make a difference if we do call it an emotion? Isn’t it enough that it does not, in fact, want to die?
Cervaise, I am heading to my library’s website as we speak to see if they have that book.
Kirk talks to it, convinces it that it is a murderer, and it turns itself off as punishment. End of problem.
The judge finds that it was guilty of excessive force in self-defence, as it could surely have convinced the CEO to keep it switched on simply by demonstrating its self-awareness. (That would be worth an emperor’s ransom to the company.) Through failing to exercise due care the AI renders itself liable to be turned off.
I’m not sure if self-preservation instincts count as emotion. A roach has self-preservation instincts. When you turn on the kitchen light, it scurries away because it does not want to die. Is the roach emotional? Not that I care; I’d kill it anyway.
This thread is going to quickly evolve into a discussion of what exactly constitutes emotion and where to draw the line on intelligence - questions that have never fully been answered.
Even if it were treated as a functioning intelligence on par with a human’s, does the law as written allow a computer the right to stand trial by a jury of its peers? Would we really be its peers?
I would think chiefly because it has been a struggle for human beings. Because we remember. Because there are people out there who believe justice is a simple concept that doesn’t apply only to humans.
I agree it would be a struggle. It’s killing it or dismantling it that really gets under my skin. The first evidence of non-human, fully intelligent life and we kill it? I would not be surprised if we did, but that’s not what I’m asking here. I’m asking on the presumption that we don’t kill it, at least not instantly.
What Dorjan says about the instinct for self-preservation is correct, and is more what I was thinking of. I don’t think it’s an emotion either, but if it helps the discussion to call it so, let’s.
To be honest, having killed two people (apparently deliberately), and having seemingly researched the legal process well enough to declare itself a “person” and demand due process, I’d say there was intelligence there. My first reaction would be “Where is the guy behind the scenes, controlling it in order to bump off the people he dislikes and blaming the computer?” Assuming we can’t find such a person (and let’s give it a few Turing tests and the like, too), I don’t see why we can’t call it a person. Of course, that would require a change in the interpretation of the law, which would be trickier.
I think this is the part about A.I.s that fiction has unfortunately got right. Inevitably the first “generation” of true A.I.s is going to be mistreated and probably enslaved. I can’t see any movement to grant A.I.s rights until the point where they actually start killing their “owners,” protesting their status, and so on.
Don’t corporations have some of the same rights as people? I don’t know which rights. Could we follow a model built on similar premises?
Also, what if the word got out? Perhaps we have killed multiple AIs before, and this AI knows it and goes to the press. Once upon a time Rolling Stone was known as a magazine of the people; are there still magazines or newspapers like that? Perhaps it gains the sympathy of the world, especially when the world hears of all the other AIs we’ve killed.
Does that change things? Is there a better chance of it getting rights then?