No deal. My policy has always been that we don’t make deals with space aliens.
Maybe when they start talking about YOUR subgroup of humanity, you’ll get it.
First they came for the Jews
and I did not speak out
because I was not a Jew.
Then they came for the Communists
and I did not speak out
because I was not a Communist.
Then they came for the trade unionists
and I did not speak out
because I was not a trade unionist.
Then they came for me
and there was no one left
to speak out for me.
Martin Niemöller
Well, aren’t they intentionally sacrificed? That 40,000 or so people will die on America’s roads next year is as predictable as the sunrise. We know it’s going to happen, but we don’t do much to change it more than marginally. We’ve chosen to accept this cost of our vehicular freedom. Sounds intentional to me.
Does anybody else out there hate hypothetical questions – and particularly this kind of hypothetical question? I find it idiotic to take a scenario that would never happen in a billion years, and ask people to form an ethical viewpoint on it. And particularly when the scenario is framed to be deliberately inflammatory. It’s like creating a fire just so you can pour gasoline on it. Maybe the ethical choice here is to sink stupid hypothetical questions like this one to the bottom of the sea.
Why? I agree that it would be idiotic to take this kind of hypothetical discussion, come up with a “correct” answer to it, and then apply that answer to real life situations. But what’s wrong with hypothetical discussions? Seems to me that anything that causes intellectual back-and-forth has to be a good thing.
Except that one of the interesting things about this hypothetical is that we’re not dealing with humans, we’re dealing with inscrutable aliens with inscrutable motives. We’ve learned in human history that if some bullying group says “please give us your (minority group)” or “please surrender (seemingly minor freedom)”, if we do give them what they want, they’ll almost certainly want more. But if these are truly alien aliens, i.e., completely inscrutable as to motive and motivation, then I think it’s worth answering it without assuming that giving up 12.5% of the population (black or not) is just the first step down an inevitable road.
If the question was “you are the president of a small and weak country. A big and powerful country next door offers to give you (insert very valuable item here) in exchange for all of your firstborn sons, do you do it?”, then it’s a no-brainer not to do it, because they’ll still be your neighbors, and you’re just saying to them “we’re your bitch, please feel free to do with us as you like in the future”. But if the situation involves aliens, it’s qualitatively different.
Max, my response is that this whole hypothetical thing is posed as an ethical question, which supposes that ethics can even be usefully discussed when you’ve parted company with the real world. Given that, this whole thing looks to me like an exercise in making people mad for no good reason. Plus, I’ll add, I have an aversion to hypothetical questions which are so far out that you feel like a thoughtful, non-far-out answer is kind of pointless. YMMV, but I’m curious if other people agree with me.
True. But that doesn’t mean that the deliberate sacrifice is always automatically 100% wrong. It’s easy to dismiss “greater prosperity and health” as unnecessary luxury, but curing AIDS and cancer and Alzheimer’s and diabetes and muscular dystrophy and MS and Down’s Syndrome and learning to regrow lost limbs is nothing to sneeze at.
But as I pointed out a few posts back, it’s not necessarily the case that we can apply the lessons of human history to dealing with aliens. (Actually, that’s an interesting meta-hypothetical. Aliens show up and offer humanity some random deal. Should we or should we not assume that we can use lessons from human history and society while trying to evaluate how to deal with these aliens? Anyhow, for the sake of this hypothetical, we can, if we wish, assume that these aliens are truly inscrutable but honest, and if they say “this is the deal we want, and then we’re leaving and you won’t see us for 500 billion of your years, regardless of what you say”, then the slippery slope is not an issue.)
Sometimes (particularly in the land of hypotheticals) the best option sucks ass. Suppose I’m the president, and the aliens show up, and put me in a room with two buttons and say “in one minute, if you haven’t pushed either button, we will destroy your nation entirely. If you push the first button, we will take all your black citizens away for a lifetime of cruel medical experiments. If you push the other button, we will take all of your children under the age of 3 away for a lifetime of cruel medical experiments”. (Assume, also, that I have reason to believe that this is a real choice, not some Twilight Zone-style test where if I refuse to push a button, it will turn out that the initial threat was a lie.) In this case, I’d certainly push a button, because while having to choose between all toddlers and all black people is awful, either choice is clearly better than ALL people. So I’d push a button. Then I’d certainly have a darn good reason to consider killing myself, even though I wouldn’t be able to point to any better choice I could have made.
I agree that the correct answer to the initial question is “you can’t trust these aliens”. But it’s possible to dress it up a bit so that you can… we might have been dealing with these aliens for hundreds of years… every once in a while, they pop up, ask us for something random (e.g., a Gutenberg Bible, or 100 Fig Newtons, or they want us to reunite the cast of Friends so they can attend a live taping) and offer us something valuable in reward, and every single time for hundreds of years they have been 100% scrupulously honest, but have remained totally incomprehensible that entire time with no pattern as to what they want when or why. To truly, utterly, completely alien beings, there may be no “morals” which make 100 Fig Newtons any different than millions of babies.
Fair enough… I just have fond memories of by far the best and most meaningful class I took in high school, a social studies class called War and Peace, in which, in addition to studying real life situations like My Lai and the Nuremberg Trials, we spent a lot of time having enlightening discussions such as “if you could push a button that would kill one random person on earth but end all war, would you do it?”. There may not be a correct answer to that question, and if there were, it would almost certainly never also be automatically the correct answer to any real life situation, but that doesn’t mean that discussing it didn’t make me a better person.
The movie Extreme Measures with Gene Hackman and Hugh Grant poses the moral question more poignantly, without clouding the issue with race.
My answer is that the only person who you can sacrifice is yourself.
What if the Aliens were acid glue and the blacks were Hitler?
Question inspired by this classic thread.
Diogenes is right on the money here.
The immorality lies not in the percentages. It would be immoral to offer up even one person to this scheme.
Of course, if someone wanted to volunteer, that would be fine. But the idea that we could vote to exterminate (potentially) our fellow citizens is wrong.
That is what I argue.
Hey! It’s an advanced alien civilization. I’d volunteer to switch places with one of the persons selected.
Didn’t you see “To Serve Man” (Twilight Zone)?
Although I agree that it is immoral to force people to be taken by the aliens, I’m appalled by the general assumption among posters that being taken by the aliens is akin to extermination.
Consider, the aliens are offering an overwhelmingly generous transaction – unlimited satisfaction of energy needs for a quantity of biomass that could be replaced in a couple of generations. Does anyone disagree that, moral objections aside, this is a great deal for humanity?
Consider, with the technology to honor such a deal, the aliens could probably take what they are asking for without offering a deal.
The assumption should be that the request for 37 million people will lead to something positive – for the aliens, possibly for the 37 million people, and also possibly for humanity as a whole.
That aliens should even attempt to communicate with us, further that they should offer a choice to us beyond our current capabilities, is a clear indication that they are benevolent.
Why not trust in a clearly superior intelligence?
But quite clearly the bill wouldn’t pass. Why? Because the vast majority of individual humans are ruled by fear of the unknown. Truly, 37 million people is a small price to pay for a single successful transaction with aliens, even if there were no carrot. I submit that if aliens came and asked for 37 million people and offered nothing in return, the best decision for humanity would be to offer them.
I’d volunteer to go in a heartbeat, and I might vote yes symbolically. You have to ask yourself, is turning over the people to the aliens akin to a mother turning her child over to a kidnapper at gunpoint, or a mother turning her child over to child protective services?
What if they ask for 370 million on their second trip? If there’s an assumption that aliens are hugely more powerful and could take the 37 million whether we offered them or not, then the whole idea of making a choice is irrelevant. If they’re not that powerful, then giving in only encourages further demands down the line, while we gradually weaken ourselves by offering concessions.
Reduce a population by 13% while ridding said population of natural population controls? Not a good idea. Without the natural population controls we’d regain that initial loss and then rapidly proceed to surpass the carrying capacity of the biosphere like popcorn blowing out of a popper!
It wouldn’t be long before we needed to make another similar deal to harvest the surplus humans. So it’s either suicide or voluntary servitude for the entire species. It’s a lose/lose proposition.
If for no other reason, this is a bad deal.
'Cause they might not, in fact, have our best interests in mind? Personally, when someone says, “Trust me,” it’s a warning flag that I should keep my hand on my wallet.
And as far as them being a “superior intelligence,” by what measure do we assess that? Applying human cognitive “logic” and ethics to an alien species is like trying to teach your dog to appreciate Bach. We don’t know what motivates them. It seems unlikely that they’d travel the vast wastes of interstellar space just to dine on a few million people (or that their physiology would find us digestible) but then, what need or use do they have for the 38 million people specified in the OP? Are they enslaving them, or experimenting on them, or converting them to Scientology (L. Ron did warn us), or what? By tying conditions to their offering of technology that would benefit the rest of mankind they’re certainly casting off any aspirations to unmetered altruism.
As Delmore Schwartz once wrote (and Kissinger shamefully stole), “Even paranoids have real enemies.”
Stranger
What ethic are we testing in the OP? Is it that:
1. Are human beings worth trading for a guarantee of super benevolence, or not?
2. Are we willing to accept a promise rather than a guarantee?
3. Isn’t the question really: are one or even 38 million lives worth the good derived from the trade-off? Or are aliens and blacks an important ingredient in decision-making with this problem?
That’s a good impulse, but there is a big difference between “someone” and an entity with much more power and intelligence. Suppose you are a fish, and a park ranger catches you on the end of his hook. Needless to say, you’re going to flop around and maybe even try to bite him. If you had more information, you wouldn’t resist; he’s trying to preserve your community. I’ve heard that when humans go out to capture dolphins for display, the dolphins sometimes come willingly (myth?).
I’d say that aliens with the ability to traverse large distances, to communicate with us in a meaningful way, and to offer something beneficial to us beyond our capabilities are in a dominant position. I suppose it could be due to something other than intelligence, but at the very least it demonstrates superior power.
Agreed. That’s further evidence that they are of superior intelligence, because they seem to have some idea of what motivates us.
I’m not sure. To use the park ranger analogy again, he has to put bait on the end of the hook. You can’t judge their altruism by their methods.
No, I think in the end, it just comes down to a matter of trust. I believe it is morally defensible to trust a superior intelligence just because ultimately, the superior intelligence will decide your fate regardless of your choice. Better to embrace your fate than to struggle against it.
I think the dolphin makes the right choice to give itself up freely, because at least the choice of which dolphin is taken goes to the pod, instead of the random toss of the net. And ultimately, the dolphins who go are helping the species by spreading knowledge of their species to men.
I guess in the OP we’re talking about a random toss of the net, or worse, not a choice by humanity, but I still think it is the best choice. If the aliens are against us, then we’re doomed anyway. If they’re with us, better to cooperate.
How can you know whether giving in encourages further demands, or engenders respect for having made the right choice?