How far in the future does morality stop mattering?

This is a slightly surreal one (inspired by a daydream).

You are visited by a vision of the future (or a time traveller, or whatever). You see that in the future ebola is going to become airborne and extremely virulent, then spread around the world via birds (the science isn’t incredibly important). Most of the human race will be wiped out: say roughly 5,000,000,000 people dead.

You can stop this, however, by perpetrating one massacre of 1,000 people, thereby stopping the mutation from occurring. (Somehow you have the means to carry out this atrocity.) There is no time, or it is not possible, to stop it otherwise (no one will believe you, etc.), i.e. your only option is to kill the future carriers.

Now, there are three different scenarios:

1. This is going to happen in about a week’s time. Most people would say that they would kill the thousand to save the 5 billion. True liberals (and Catholics) won’t, but everyone else probably will.

2. This is going to happen in a year’s time. Again, I think that most people would choose the greater good.

3. This is going to happen in 200 years’ time. You have to kill the first carriers’ ancestors. This is the one that I wonder about. On the one hand, future people are still people. On the other, is it really right to kill people now to save those who won’t be born for generations? At what point can you put up your hands and say, “I can’t take this into account any more; that’s too far into the future”?

In short, what do we owe to our descendants, and should a future life be weighed the same as a life in the here and now?

I refuse to entertain any hypothetical moral dilemma scenario that does not include the option of rescue by intelligent giant squid.

If you really want, the 1,000 people can be a sacrifice to the intelligent Giant Squid who have the cure. :slight_smile:

Who are the thousand people who die - old, young, ill, healthy?

Who are the 5 billion people who die - same?

How do the thousand people die? Suffering as people do with ebola? How long does it last?

How do I know this guy is really a time traveller and not a psycho?

Will I be caught, tried, and imprisoned? That shouldn’t matter, but I’m afraid it does enter into my thinking here.

What you’re talking about is what economists call “future discounting.” And I’d need to know the full cost and the full value before I made a call. (A rough sketch of what discounting does to these numbers is below.)

ETA: Would what I did ever be known to the public?
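
To put some numbers on the discounting point, here’s a minimal sketch of what exponential discounting does to the three scenarios. The 3% annual rate is purely an illustrative assumption, not a claim about the “right” social discount rate; whether lives should be discounted at all (rather than just money) is exactly what the thread is asking.

```python
# Minimal sketch of exponential "future discounting". The 3% annual
# rate is an arbitrary illustrative assumption.
def discounted_value(future_value, rate, years):
    """Present-day equivalent of a future quantity under exponential discounting."""
    return future_value / (1 + rate) ** years

future_lives = 5_000_000_000
for label, years in [("one week", 1 / 52), ("one year", 1), ("200 years", 200)]:
    pv = discounted_value(future_lives, rate=0.03, years=years)
    print(f"{label:>9}: ~{pv:,.0f} present-life equivalents")

# Even at 200 years, the discounted figure (~13.5 million) still dwarfs
# the 1,000 killed -- so discounting alone doesn't settle scenario 3.
```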

In the abstract, yes, you kill them. In reality, no, you don’t. Why? Because realistically, that scenario isn’t going to happen. It requires too many unlikely constraints; realistically, there’d be another way. And, also realistically, few people would trust the word of, or evidence given by, someone offering such a devil’s bargain.

That being said, it isn’t that different from what we do now. People die from medical mistakes, bad drug reactions and such, but we put up with it because far more people are helped than are hurt by the errors. This is just an example of knowing beforehand who’ll die. It seems weird because normally you don’t know whom you are going to kill - and if you did, you’d just not do it. Knowing which innocent needs to die by your hand, and still having to kill them, is more of a military situation than a medical one.

I agree, Der Trihs, but this is one of those huge oversimplification thought exercises.

But your parallels to current-day medicine and such are good ones. By accepting a certain amount of known risk, we do in essence agree to kill off a small number of our population every year in exchange for saving a far larger number, or at least so we hope.

This subject is just begging for a Godwinism.

Currently, as pointed out above, medicine (and automobile travel, etc.) has a failure rate that results in the deaths of some of its users. Some of those users may not have been aware of the statistics, but probably would have elected to go through with the treatments anyway. As a whole, society tolerates unpredictable failures (as opposed to deliberate negligence) because the good outweighs the bad, and because to most folks the failures seem to be spread around not unlike a “lottery”: bad luck, no one to blame, we all share the same risks. (A toy numerical version of this trade-off is sketched below.)

But deliberately selecting a group of folks, and killing them for the benefit of others, has typically been seen/judged as some of the worst acts in history.
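
Here’s that toy numerical version of the “statistical lottery” trade-off. Every figure is invented purely for illustration:

```python
# Toy illustration: a treatment with a small per-use fatality rate,
# taken by many people, predictably kills some of them -- we accept
# that because of the larger number saved. All figures are made up.
users = 1_000_000
fatality_rate = 1e-4      # assumed: 1 in 10,000 users dies from the treatment
cure_rate = 0.10          # assumed: 1 in 10 users is saved who'd otherwise die

expected_deaths = users * fatality_rate   # 100
expected_saved = users * cure_rate        # 100,000
print(f"Expected deaths: {expected_deaths:,.0f}, expected saved: {expected_saved:,.0f}")

# The trade is accepted partly because nobody knows *which* 100 will die;
# the OP's scenario is the same trade with the victims named in advance.
```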

The issue of whether or not you trust the vision/time-traveller is a good one.

I suppose that I just want to know (among other things) whether people consider those who will live far in the future to be the equals of people alive now.

Mles: If one person had the disease, but the disease would spread if they survived, would you not take action to stop the deaths of millions? (Although, tbh, the person in question would most likely kill themselves if that were the case.)

Now that’s an interesting question. If you knew that you were the carrier of a disease that could cause the death of billions of people, would you kill yourself?

I’d shoot for life in a bubble.

I wouldn’t. In 200 years, the world will probably be better off with 5 billion fewer people.

You know, I actually thought about that and wondered. I would have to ask the time traveller about conditions. Perhaps we might have developed colonization of other planets by then, although things aren’t looking terribly good on that front at the moment.

ETA: Still, ebola is a horrible way for people to die.

Laudenum, you make me sleepy!
I haven’t done the math, but if this is 200 years down the road, how many people would you need to kill to assure that the 5 billion who will one day die, will ne’er be born?
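
For what it’s worth, here’s a deliberately crude stab at that math. Generation length, fertility, and the no-shared-ancestry simplification are all invented assumptions; real family trees overlap enormously, so treat the result as a naive lower bound only.

```python
# Toy answer to "how many people would you need to kill today so that
# the 5 billion future victims are never born?" Every parameter below
# is assumed for illustration.
YEARS_AHEAD = 200
YEARS_PER_GENERATION = 25            # assumed
CHILDREN_PER_PERSON = 2              # assumed: each line doubles per generation
FUTURE_VICTIMS = 5_000_000_000

generations = YEARS_AHEAD // YEARS_PER_GENERATION            # 8
descendants_per_person = CHILDREN_PER_PERSON ** generations  # 2**8 = 256

kills_needed = FUTURE_VICTIMS // descendants_per_person
print(f"{generations} generations out, ~{descendants_per_person} descendants per person today")
print(f"Naive kill count now: ~{kills_needed:,}")            # ~19.5 million

# Overlapping family trees mean descendant sets overlap too, so the real
# count would be higher still -- which makes the OP's 1,000-person bargain
# look almost modest by comparison.
```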

I doubt that. Assuming that society hasn’t collapsed (in which case there wouldn’t be 5 billion people in the first place), technology by then should make supporting that many people no real trouble. Especially since, according to the OP, that’s “most of the human race”, so we aren’t talking about some future where Earth has been Trantor-ized and is a big metal sphere with trillions of people.

That should be another thread, but tbh I don’t really think that there is a choice for a decent person. Even if you live in a plastic bubble, and the chances of your spreading the disease are a million to one, can you still take that risk? Could you ever play some kind of twisted lottery with so many lives?

Suicide is a lot easier to talk about than to do.

The disease in question is a virulent ebola (I heard somewhere that there is a small chance this may happen, but that it would burn itself out quite quickly).
From what I know of ebola, I don’t think that I would risk inflicting it on billions (for that matter, euthanasia might be preferable to dying of ebola).

It’s still a lot easier to talk about than to do. There’s something enormously terrifying about the idea of dying Right Now. If you’re already coming down with ebola, that’s one thing. But if you’re fairly healthy at the moment, this becomes a hard thing to do. Not saying you wouldn’t do it. Not even saying I wouldn’t do it. Just saying that it’s harder than you think it is.

Oh, I get that. It’s easy to mouth off about it, but the reality is different. At the same time (excluding Tuvix), I can’t see anyone choosing their own life over billions.

What’s Tuvix?

A character created in Star Trek: Voyager via a transporter malfunction that fused two people together. In order for them to live again as separate people, he has to die - or be unmade, whichever term you prefer.