Walk away, dude.
If they’re such brilliant and omniscient people, they can figure out a way to guide history without killing people. They kill because it’s easier.
So fuck them. Let the world burn.
Interesting scenario.
Suppose you decide to rat them out. :eek:
How would you get anyone to listen and take you seriously?
Go on twitter?
Post a youtube video?
Call 911?
Post on FB?
Post a Straight Dope thread warning about them?
Publish a book?
Call into a talk radio show?
Not that easy to rat them out either, as they would deny everything and make you sound like Chicken Little.
I would out them in a heartbeat. They are murderers. If their ersatz, off-brand Psychohistory works, they should, at this point, be able to prevent war without killing anyone (lead the politician’s husband into a honeytrap instead; sabotage the engineer’s experiments directly; cook the tycoon’s books and get the secretary arrested for it).
“Not even in the face of Armageddon!”
Or with a little extra work, they can change the world only with positive actions. Make someone win big in Vegas. Have the malcontent meet the love of his life. Help the alternate candidate’s message go viral. They can be guardian angels, not executioners.
But they won’t. There’s a reason autocrats default to killing people - because it’s easier than helping them.
True.
Exactly.
Without challenging the hypothetical, the aspect that troubles me the most is popping off the husband of the politician, or the secretary of the tycoon. Why not take out the offending person rather than an innocent bystander?
I see you made a reference to high-profile targets, but if this group is as well-resourced as it would need to be, and only kills 12 people a year, I’m sure they can arrange accidents even for high-profile people. Realistically, short of POTUS and a small handful of others, how many individuals have enough security to pose a challenge to apparently professional assassins?
I feel like starting a new thread: if challenging the hypothetical resulted in the death of your mother, would you still do it? Of course the bulk of responses would be from people saying their mother wouldn’t really die, or that their mom is already dead, so it wouldn’t go anywhere.
I was wondering about this myself. The subject of the story seems to pose more of a potential social disruption than any of the examples, or the innocent associates of the examples. The star chamber should have been able to predict this would happen and taken the subject out before he or she gained the dangerous knowledge.
Sigh.
I recommend the book if you’re interested in the mechanics. It’s quite good.
The central question of the thread is not intended to be how statisticians might work in five centuries. The central question is the variant on Omelas/train dilemma: would you accept the deliberate murder of a dozen or so innocent people every year, if in doing so it prevented war?
If all you want to talk about are the mechanics of the book, I can start a thread in Cafe Society on it.
For the central question, disregarding the sci-fi details, I’m forced to answer “yes” – between two very bad circumstances, a dozen murders or a war that presumably results in the deaths of thousands, a dozen murders is very clearly preferable.
I don’t think there’s a way to frame this question in a realistic way without sci-fi magic – in the original framing of the question, my answer would be “compel them to achieve the same results using non-violent methods” (similar to what Alessan suggested). A sci-fi magic scenario would be something like “Godlike aliens grab you and show you two visions of the future: one with a world government that murders 20 people per year but is otherwise perfect and peaceful, or one with widespread unrest and disruption resulting in thousands dead from war each year – you must choose one or the Earth will be disintegrated”.
It’s hard to do hypotheticals in a way that focuses on the exact point you want to discuss!
I wouldn’t rat them out.
I’d kill the lot of them, for their arrogance. They think they can sit in judgement of who is sacrificial? Fuck that shit. Absolute power corrupts absolutely. I don’t trust people like that, and neither should you.
Note: I am for killing Veidt at the end of Watchmen, for the same reason, and if I were Remo Williams, I’d have killed everyone in the organization that recruited me. They, too, should have seen it coming.
“We recruited you to be part of an organization that murders people who think they are above the law.” “Which law do you answer to, then?” “Ummm…” BANG
Fiat justitia ruat caelum. (Let justice be done, though the heavens fall.)
I don’t buy the argument. You’ve got this advanced statistical knowledge, but you can’t send Hitler to art school to prevent WWII; you’ve got to kill him instead.
The reason these hypotheticals never work is that the world doesn’t work this way. The hypotheticals require you to stipulate that you know the future outcomes exactly, and so it’s either the train crashes into the car with one person or the train crashes into the car with five people; no other options are possible.
But the reason people refuse to make the split-second decision to hypothetically throw the switch to change the hypothetical train to the track with the hypothetical car with one hypothetical person is that real life doesn’t usually work this way. And when it does, you usually don’t have the time to think about it; you have to react on instinct, not logic.
Sure, we make utilitarian arguments all the time. There’s a river with a ferry crossing, and every couple of years a ferry crashes and sinks and kills a dozen people. Let’s build a bridge! But bridge building is dangerous work; going by previous projects of this type, it’s likely that 5 workers will be killed building the bridge. Paralysis!
No, we build the bridge. We also try to improve worker safety, and in fact it’s a lot safer to build a bridge today than it was 100 years ago. So in the real world we can have both: we build the bridge, and we keep the workers safe as well. Of course we can’t get all risks down to zero, because all men are mortal. That doesn’t mean we can’t improve, or that we’re stuck with a utilitarian calculus of shitty situation A versus slightly less shitty situation B. Let’s figure out much less shitty situation C and go with that one. And I say this sitting warm and dry in a comfy chair with a full stomach, looking out the window at a miserable rainstorm and chatting about hypothetical situations, rather than breaking my back as a subsistence farmer in medieval Europe. Win-win! Positive sum! Progress! It’s not an illusion.
As soon as a prediction is shared with the subject of the prediction, you run into all kinds of paradoxes. This situation could actually be reduced to Newcomb’s Paradox under certain conditions. E.g., they say: “We believe in giving chances. But we’re not stupid. We’ve already predicted whether or not you will rat us out. If we predicted you would, we’ve already put in motion an unstoppable plan that will kill you tomorrow. If we predicted you wouldn’t, there’s no plan in place. Now go home.”
So, do you rat them out if they say that?
If I make it through tomorrow, then yes. They’re clearly idiots.
Not idiots. That’s the simple prediction paradox: if they predict you will do something that involves your free will, and tell you, you can always falsify the prediction.
IMO this demonstrates the problem with the “trolley dilemma” generally.
There is a lot of uncertainty on one side of the dilemma: you only have their word for it that these murders are actually preventing war, and that the murdering cabal actually has society’s best interests at heart. Just as in most “trolley dilemma” situations, you are presented with definitely, unambiguously killing someone right now, versus someone hypothetically dying in the future, based on your prediction of what will happen.
Yes. The hypothetical is silly. It’s like asking “if selling the moon’s green cheese would make you rich, but would also ruin the tides as well as a million songs, would you do it?”
The hypothetical itself is based on a false premise: that someone can predict the course of history well enough to know that the removal of certain people will bring about such-and-such a result. This is not something that we’ll learn to do with time because the human brain just doesn’t work that way. (In addition, the ‘removing them doesn’t require killing them’ argument is sound.)
The premise is merely a power fantasy: “I get to cold-bloodedly kill twelve people and still be a Good Person.”
I really want to say that the ends never justify the means, but the hypothetical is just too perfect. It’s a heavily skewed version of Rorschach vs. Ozy. But, for the record, I would take a very unprincipled stance and let the cabal continue.
Well, particularly for the politician, her death might lead to a bad outcome - conspiracy theories that Eastasia caused her death, leading to a spiral towards war, for instance. The scientist is actually killed. I can’t come up with a scenario where leaving the tycoon alive is necessary vs. his secretary - maybe his premature death would change the path of his empire in bad ways? And killing the secretary is one death instead of the multiple deaths that would be required to prevent that bad outcome (maybe several of his heirs?).