A Time Travel Conundrum (Shades of Newcomb)

Since it’s not clear that Steve’s winnings are directly connected with how many stones are put in the box, I can’t see how this is analogous to Newcomb’s Paradox. Newcomb’s Paradox doesn’t involve time travel, as far as I’m aware, nor are the player’s actions only indirectly involved with the outcome of the payout.

It appears that, in your scenario, there is no causal connection between the rocks you put into the box and how much Steve wins. Right?

Is there any argument for filling the box with rocks? It’s not going to make Steve any richer, since there isn’t any sort of causal connection here between the box and the lotto. (This is an overt difference between this scenario and Newcomb’s paradox.)

Basically, the time traveler has come back to say, “Here, I know of this neat coincidence, which happens to prove that you are unable to control your fate!” He’s not offering you anything and he’s not enabling you to offer Steve anything. He’s merely come back to mess with your head and perhaps shatter one of your illusions. Oh, and to try to talk you into moving rocks around for no good reason like a silly person.

The rock in the box: I burning your wallet!

Given the chance that you’re in a deterministic universe (i.e., somehow, events will occur that result in the time traveler’s claim being true, a.k.a. the Bill and Ted model), it seems like a particularly poor idea to attempt to force a paradox, then hang around to protect it. While the universe could decide to have a hot naked chick come along and offer to have sex with you if you’ll only take n stones out of the box, there are a whole lot of other scenarios that end with you as a bloodsmear on the beach and someone else putting the “proper” number of stones in the box.

Well, fortunately for us “not helping the time traveler be right” types, in post 22, Frylock adjusted the rules so that only stones that we, the beachcomber, put in or take out count. So, all the bloodsmear cases no longer satisfy the scenario. (The hot naked chick scenario still does, but that fact doesn’t really compel me to abandon the box. :slight_smile: )

Ah. In that case, let’s hear your position, if you have one you’d be willing to share.

Well heck, I was hoping one of you people would show me the way here. :wink:

I don’t have a position, but here’s what I think.

The theories of causation I know about are the counterfactual account, the manipulationist account, and this other account I forget the name of. As far as I know, none of these accounts would make the relationship between my actions and Steve’s situation a causal one.

The counterfactual account tries to analyze causation in terms of certain counterfactuals. But in the case I’ve given, there are no counterfactuals that hold about Steve’s situation that turn on the question of what I do with the box.* If I had decided to put a different number in the box than I actually did decide, Steve still would have ended up in exactly the same situation.

The manipulationist account says there is a causal relation between A and B just when you can manipulate B by manipulating A. But the thing to say here is similar to the thing to say about the counterfactual account–you can’t manipulate Steve’s winnings by manipulating the number of stones in the box.*
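To make that test a little more concrete, here is a toy Python sketch of how I’m picturing it; the run_world helper, the seed, and the whole little model are just things I’m inventing to illustrate the point, not anything from the scenario itself:

```python
# A minimal sketch of the manipulationist test under my toy model:
# hold the rest of the world fixed, intervene on the number of stones,
# and see whether Steve's winnings respond.

import random

def run_world(seed, intervene_stones=None):
    """One run of the toy world; intervene_stones forces the box contents."""
    rng = random.Random(seed)
    winnings_millions = rng.randrange(0, 11)  # the lottery draw, fixed by the seed
    # Absent intervention, the beachcomber just happens to match the draw
    # (that is the coincidence the time traveler reports).
    stones = winnings_millions if intervene_stones is None else intervene_stones
    return stones, winnings_millions

seed = 42
print(run_world(seed))                      # (n, n): the reported coincidence
print(run_world(seed, intervene_stones=3))  # (3, n): winnings unchanged
```

Intervening on the stones changes the box but leaves the winnings exactly where they were, so on this account there is no causal arrow from box to lottery, which is just what I said in prose above.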

Meanwhile, the other account I can’t remember the name of involves the question whether there is information that is somehow preserved and transmitted from A to B. This is not happening in the case I outlined.

So as far as I can tell, there is not a causal relation here, at least not on any account of causation I’ve heard of. Yet it is very plausible to think that one cannot deliberate about things that are not within one’s power, and I don’t know how “power” could be explicated in any but causal terms.

So as far as I can tell, it is not within my power to determine how much Steve is going to win, and this should mean it is not possible for me to deliberate about whether and how much I’m going to help Steve win. Yet I do find it possible; in fact, I find it impossible (for me) not to think I am in a position to deliberate over what to do about Steve’s situation.

Perhaps a new account of causation is needed. Or perhaps instead what is needed is a different account as to what can be deliberated over. Or maybe I need to find a way to divest myself of the unshakable feeling that I have some responsibility concerning Steve’s situation. Or finally, maybe I need to fight the hypothetical and deny that there could be such a situation as the one I’ve described in the OP.

So basically, what I think is, WTF mate?

-FrL-

*Both asterisks above point here. It might be argued that there are counterfactuals that hold concerning Steve’s situation that turn on the question of what I do with the box. Since all I know is that the number of stones in the box is the same as the number of millions Steve wins, I am in a position to say something like “I now intend to put five stones in the box, but were I to put 10 stones instead, Steve would win 10 million, and were I to put no stones, Steve would win nothing” and so on. This looks like a set of counterfactuals. But they don’t seem to be the kind that are supposed to support the existence of a causal relation. For these are, as it were, epistemological counterfactuals rather than metaphysical ones.

What’s that distinction? Well, I’m making this up as I go, but: An epistemological counterfactual is one whose truth or falsity turns on what the person making the counterfactual judgment knows or doesn’t know about the situation. A metaphysical counterfactual is one whose truth or falsity turns just on what the objects are that the judgment is about, and what their actual relationships are.

So, for example, my judgment “If I were to put 10 stones in instead of the five I’m going to, Steve would win 10 million instead of 5 million” is an epistemological counterfactual because it is only true by virtue of the fact that I don’t know how many stones I am going to put in. Meanwhile, the counterfactual “If I had put 10 stones in the box, Steve would have won 10 million” is a metaphysical counterfactual because it is false by virtue of facts about the stones and the lottery and Steve (etc.) and their relations only. Anyway, it seems like counterfactual (and manipulationist) theories of causation want to talk about a metaphysical relation, not an epistemological one.

That was hardly clear at all, and I’m hardly clear about what I was trying to say, but I’ll leave it up there in case anyone wants to chew on it and comment.

A Time Traveler comes to you and says, “See that box over there? Put some rocks in it, and however many rocks you put in it, something of that number is gonna happen to someone tomorrow.”

It could be a guy winning the Lotto. It could be how many cars crash in a pile up. It could be how many gummy worms a little girl eats. It could be how many stars go supernova. It could be how many times a bear shits in the woods.

“Okay,” you say.

You end up putting in 3 rocks, because they turn out to be heavy, and you realize there’s no real point to this.

Turns out, something in the amount of 3 happened to somebody the next day. Why just single out the guy who won 3 million bucks? There’s no relation among the box, the information the time traveler gave you, and the outcome in some random circumstance somewhere.

I thought your post was pretty clear, for what it’s worth. I think I would be perfectly willing to deliberate over decisions when the appropriate epistemological counterfactual concerns are present. But when it comes to ascribing responsibility/blame, etc., then an account of causality based on metaphysically flavored counterfactuals becomes the more natural one. There is a slightly uncomfortable incongruity here, but perhaps there’s no great reason why the two uses of the language of causality should match. Though one could, of course, still press for further explanation/justification of just why I would act/speak in this way.

Here is how I look at it:

The chance of Steve winning the lottery is one in tens of millions.

So, the chance of me being able to successfully put even one stone in the box is also one in tens of millions.

It is much more likely that I would suffer a stroke or heart attack, get hit by a car, etc. in my attempt to put stones in the box, than to help a stranger win the lottery.

No thanks. I walk away. No stones in the box. No millions for Steve.

Interesting. But, presumably, prior to being told of the correlation between your putting stones in the box and Steve’s winning the lottery, you believed the former had decent chances of occurring (certainly not as low as one in tens of millions) while the latter had very poor chances. Why is it that, once you were told the two were equivalent, you switched to taking both as having very poor chances, rather than, say, taking both as having decent chances, or modifying probabilities throughout in some more drastic way, when updating the probability distribution underlying your beliefs by conditioning on the new knowledge given by the time traveler?

That is to say, if you could reason as you did, what would stop you from reaching a different conclusion by performing the analogous reasoning of “The chance of me being able to successfully put even one stone in the box is pretty good. So, the chance of Steve winning the lottery is also pretty good.”
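For what it’s worth, here is a toy binary version of that updating in Python; the two priors are just numbers I’m inventing to make the point, not anything dictated by the scenario:

```python
# Toy version of conditioning on the time traveler's report.
p_stone = 0.999   # my made-up prior: I manage to put a stone in the box
p_win   = 1e-7    # my made-up prior: Steve wins the lottery

# The report: the two outcomes agree (both happen or neither does).
p_match = p_stone * p_win + (1 - p_stone) * (1 - p_win)

# Given a match, the stone goes in exactly when Steve wins.
p_stone_given_match = (p_stone * p_win) / p_match

print(round(p_match, 6))          # about 0.001
print(p_stone_given_match)        # about 1e-4 for both events
```

With these made-up priors, conditioning pulls both events to the same middling probability rather than simply handing the box the lottery’s long odds (or handing the lottery the box’s decent odds), and which value you land on depends entirely on the priors you started with.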

Great point. But I would put the probability of being unable to put a stone in the box far higher than one in ten million. Maybe one in a thousand?

And the “failures” of not being able to put a stone in a box are likely to be fatal or crippling to me. All to help Steve? I think not. Fuck Steve and everyone who looks like Steve. :wink: