Er…possible spoilers ahead, now that I think about it.
I caught a rerun of Batman: The Animated Series the other night—man, I forgot what a good show that was. Pretty gritty, compared to later DC shows—and towards the end of the episode, I couldn’t help but notice that something seemed “off”—then it clicked: Batman destroyed several android opponents. And these weren’t mindless drones, either. They were thinking, reasoning, talking beings. And he killed them.
This from a superhero famous for his at times bewildering “never kill” policy—which seems to only apply to organics.
That got me to thinking: a lot of the animated DC heroes don’t seem to have any qualms about taking advanced artificial life. A number of alien robots were destroyed in a battle by the Justice League; Superman didn’t seem upset about the death of an android he had been helping hide from a supervillain in a “Static Shock” crossover; and the Terry McGinnis Batman once went on an assassination mission against the titular star of The Zeta Project.
Heck, I think this even extends to a few of the comics—Alex Ross’ Kingdom Come, practically a love-letter to traditional superhero morality, at one point has Superman casually chatting about how he defeated a robot supervillain, then dismantled his brain, and scattered it across time and space. He even kept one chunk on the mantle of his apartment(!).
Now, I’m as pragmatic as it gets, and I’m up for using any loophole available to take out as many supervillains as possible—but I can’t help but notice the seeming hypocrisy of the morals involved.
So, I was wondering: Are there any superheroes who won’t kill AIs, or who at least show some ethical concern about the practice? Has anyone ever been jailed for roboticide?
I mean, I can’t be the only hopeless geek who’s thought about this…right?
I don’t think they exist in the DCU. Superman, Batman, Green Arrow, even the Green Lanterns (who are constrained from killing by an outside force) and the Legion (whose code against killing is so strong that self-defence isn’t a valid defence)—all of them do it.
Hell, even the heroes who ARE AIs—Red Tornado, or the Metal Men, for instance—do it.
Reddy and Magnus’s robots, as well as several AI supervillains—Brainiac, Kilg%re, the Thinker—hell, even Robotman and Cyborg, who are partially organic, all give a pretty good justification for it, though. AIs are damned hard to kill permanently in the DCU—all of the above have been ‘killed’ multiple times, but managed to come back from it each time. Hell, it’s sort of Reddy’s schtick. (The fact that the same also applies to the popular organics can be ignored because of the proportions: relative to their numbers, more truly sentient machines have been ‘killed’ and returned than organics.)
Marvel is the same way; it’s not just a DC problem. I’ve always found it rather off-putting. I recall an issue of Alpha Flight where the guy who can telekinetically manipulate machinery ripped the insides of an android out through her mouth, and made some comment about how “any other machine that thinks it’s one up on us humans will get the same.” Just imagine the reaction if, say, Machine Man reached down some robot-smasher’s mouth, yanked out the guy’s innards, and commented that “Any human who thinks he’s one up on us machines will get the same!”
Androids and robotic opponents are there purely as a means to allow superheroes to go ‘all out’ without compromising their surface integrity. I would imagine fewer people consider even a sentient AI to be on the same level as a living being, even a living alien, and thus fewer people would have a problem with a robot being destroyed in nasty ways.
I saw the same phenomenon in the Samurai Jack cartoon—great show, but you can’t help noticing that of the thousands upon thousands of enemies Jack annihilates, every last one is a robot. Not one. Single. Meatbag gets mortally wounded by Jack’s sword. (I suppose there may have been an exception or two, but not many. Jack engaged in wholesale robotic slaughter.)
This even turns up in the universe of Star Trek: The Next Generation. Because of Lore’s evil nature, and possibly because the Federation had not ruled on his status as a sentient being as it had done with his brother, Data, the crew of the Enterprise treats him with less regard than they do Data. In “Datalore,” he was beamed into space. Although Lore could survive for indefinite periods in the vacuum of space, this still had the effect of depriving him of his liberty without due process. In “Descent, Part II,” they were more extreme with Lore, deactivating and disassembling him, again without the same due process they would have granted Data. Instead of giving Lore the same treatment they would have given Data or any organic life form (binding him over for trial, for example), they treated him as just another machine.
Exactly. It used to be that aliens were fair game, but now they are not (thanks to dolphins in tuna nets, I guess). Robots are forever non-human and ok to destroy. Ditto for slobs, blobs, globs and all other amoeboid enemies.
At least in “Datalore” they had the excuse that Data hadn’t yet been recognized as having rights (that happened in S2’s “The Measure of a Man”). But Lore’s disassembly in “Descent” seriously pissed me off. I’d love to see a NextGen novel in which Data is put on trial for it, because (assuming that Lore was destroyed when the Enterprise was) Data’s guilty of at least manslaughter.
Especially true since Lore had a functioning emotion chip. Data didn’t get his installed until Generations. So, Lore had feelings for Data, maybe even a kind of subverted love for him, almost certainly envy, while Data had none for Lore. His disassembly of Lore came across as cold-heartedly logical. Even a Vulcan might have argued that Lore had rights, by extension of the same ruling that granted them to Data.
Actually, wasn’t “The Measure of a Man” a challenge to Data’s already recognized (if somewhat tentative and implicit) rights, rather than the first time the issue came up? I’m pretty sure it was in the EU novels that Data actually says of that trial: “This is the third time my rights as a person have been the subject of debate—the first two being when I was first discovered by Starfleet, and when I chose to apply to the Academy as a self-determining individual. I did not question the need, on those two occasions. But now, it seems like there will always be someone new wanting to take away the protections I have been granted.” But still, you’d hardly expect that he’d have been able to sign up for Starfleet without some of the same controversy developing.
An episode script (which jibes with my memory but isn’t an actual “transcript”) notes that Maddox opposed Data’s entry into Starfleet on the grounds that he wasn’t sentient. I guess that implies that the sentience question was already established to a point. The hearing was about whether or not he was the property of Starfleet (“Would you allow the Enterprise computer to refuse an upgrade?”). The idea wasn’t developed any further in the series that I can recall.
For example, in the videogame world if you show a human getting killed that’s pretty much an automatic “Teen” rating. (Unless it’s really cartoony.) Do the same thing to a robot and you can get by with a “10 and up”. This is particularly important when you’re looking at products that are targeted at kids.
Buzz Lightyear destroys hordes of Emperor Zurg’s robot marauders without a second thought. It wouldn’t be quite as kiddy-friendly if he was whacking organic foes by the thousands. Blood, ichor or alien circulatory fluid isn’t G-rated in mass quantities.
Sorry… referring to the not-really-canon ST novels as ‘expanded universe’. Specifically, it would probably be the giant paperback ‘Metamorphosis’, which starts right after “The Measure of a Man” leaves off (with Data and Riker arriving at Data’s welcome-back-to-Starfleet party), and has Data become human, realize that he thus dooms the Federation, and go back in time to choose another path in his life. Fun read.
I always thought that the property question was very odd in context… even if Data had no rights of person, he had not been developed by Starfleet, and I think that a claim that Starfleet had salvage rights over him because they discovered him at the ruins of the colony would be tenuous… did they ever conduct a search for Soong’s legal heirs? Probably not, or they’d have found one Juliana Soong-Tainer or another running around, if not Soong himself. (Okay, that’s a cheap shot—Soong and Juliana were probably pretty hard to find back then.)
My contention, though, was that after letting Data follow through the forms of joining Starfleet as if he were an individual with rights of person, it’s very odd legalistically to change the rules in the middle and refuse him the same rights of resignation as any other officer. Remember the bit in ‘Encounter at Farpoint’ when Riker assumes, out loud, that Data’s rank is honorary, and Data corrects him, adding that he graduated with honors from the Academy. He earned his way in, as I see it.
And - did Maddox have a true change of heart at the end of ‘Measure’? Obviously Data didn’t have to follow through on his threat to resign after all.
Maddox seemed to have been won over by Picard’s argument, somewhat—at the end of the episode, after the court ruling, he pointedly says “he’s remarkable” about Data, when before, he’d been very insistent about calling him an “it.” A later episode even shows that Data and Maddox had kept up a correspondence about cybernetics.
Kind of an interesting, related note on the “Would you allow the Enterprise computer to refuse an upgrade?” argument in that same episode—Nitpicker’s Guide author Phil Farrand wondered what, exactly, Picard or Starfleet would do if the computer did refuse. Would they really just reformat or dismantle it without a second thought?
Well, later on in the series, Picard basically stood by and let the Enterprise computer complete an upgrade that was basically its own idea, not one that any Starfleet staff had tried to do. Not exactly the same thing, but gives you a kind of an idea.
And that’s a good point about Maddox being impressed with Data, and Data sending him a subspace letter later on. I do remember Data saying something along the lines of, “Keep working on this stuff. When you’ve got experimental results that convince me you’ll be capable of putting me back together, I’ll consent to your experiment.” So that might have been a factor in Maddox—and the admiral backing him, who originally issued Data’s marching orders—deciding that it would be better for Starfleet as a whole to leave Data on the Enterprise, instead of forcing him out of the service.
And also, as I seem to recall, the background of the case was that if Data had not been ‘property’, Starfleet could still reassign him at will, but could not force him to submit to a dangerous experiment, even if he stayed in the service. He’d been worried, though, that if he left the Enterprise and reported for ‘duty’ with Maddox, he’d get taken apart whether he consented or not, and it would be hard for his friends to prove that he hadn’t agreed, or raise legal charges on his behalf. Once Judge Louvois handed down a firm decision of ‘NOT property,’ that would’ve gotten a lot harder.