Roadfood, I get the impression we agree more than we disagree, though I’m not sure our agreement is 100%. I do disagree with your claim that expected value is meaningless; I claim that the expected value is relevant, but is far from being the only deciding factor.
I know I’ve said this before, but again, take a lottery with a one in 14 million chance of winning. Let’s say the jackpot starts at $2; call this particular game, with this particular jackpot, Game A. Should you play (a general “you”, not you specifically)? Now I’ve seen enough of the human race to feel confident there are some idiots out there willing to take this bet, but hopefully their population is small. This is clearly a bad idea.
Now let the jackpot rise. At what point is it reasonable to play? $1,000? $100,000? $1,000,000? $14,000,000?
I agree that the “breaking point” is not likely to be $14,000,000 (where the expectation changes from positive to negative). There isn’t much difference between jackpots of 13 mil and 15 mil.
However, I do claim that the “breaking point” jackpot (though it might be more appropriately described as a “grey interval” of jackpots) is a function of expected value, among other things.
Game A is a poor bet because the $2 jackpot is too small to offset the remote chance of winning. But I also claim it is reasonable to play when the jackpot reaches a certain point; this point isn’t determined solely by the expected value, but the expected value is a factor in determining it.
You might claim that if you play only a single game, however, that you’re virtually guaranteed to lose anyway. I would counter that people really do win. Now, I know that’s a cliche, but it is true: a one in 14 mil chance is significantly different from a zero chance. It’s a remote chance, but there certainly is a chance you will win.
Still, let’s look at it another way. Back to Game A ($2 jackpot). What happens if we fix the jackpot and let the odds of winning increase? At what point do you play? When your chance of winning is 30%? 50%? 90%?
Again, I claim there is a point where it becomes reasonable to play. Again, it’s not determined solely by expected value, but it is a function of expected value (among other things).
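For concreteness, the expected value these posts keep referring to is easy to compute directly. A minimal sketch, assuming a $1 ticket, one-in-14-million odds, and the jackpots mentioned above:

```python
def expected_value(jackpot, p_win, cost=1.0):
    # Net expectation per play: win (jackpot - cost) with probability
    # p_win, lose the ticket cost otherwise.
    return p_win * jackpot - cost

p = 1 / 14_000_000
for jackpot in (2, 1_000, 100_000, 1_000_000, 14_000_000, 28_000_000):
    print(f"${jackpot:>11,}: EV = ${expected_value(jackpot, p):+.6f}")
```

The sign flips exactly at a $14,000,000 jackpot, which is where the thread’s “$14 million” figure comes from; the argument here is that the “grey interval” of reasonable jackpots need not sit at that sign change.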
One more thing, back to a specific example mentioned earlier:
I would claim there’s a damn good statistical reason for playing this game. Sure, you’re most likely to lose, but you only lose $1. Additionally, a 1/10 chance of winning means there’s a hell of a “reasonable” chance you’ll win that single game, and win huge. And I claim that this decision is in part based on the fact that the expected value is so extraordinary. This, of course, is not the only deciding factor. It’s also a good game because it’s affordable to play, its winnings are significant, and there’s a “reasonable” chance you will win (even if you play only once).
But that’s just not true at all. From a statistical standpoint, there is no difference between playing the game offered by one guy many times, and playing an equivalent game offered by many guys one time each. What you’ve done above is to just use a somewhat creative way to change the question from one involving a single play to one involving many plays.
I have never, not once, anywhere on this thread disputed that expected value is meaningful in a large number of trials. My question has always pertained to a single trial (ok, to a very small number of trials with respect to the odds of winning, but can I just say “single trial” as a shorthand for that from now on?). Your example above simply changes my question from “what does expected value mean in a single trial?” to “what does expected value mean in a large number of individual trials of the same game offered by different guys?”
I’m actually very surprised that you say: " if a game is worth playing many times, it’s worth playing once." How can that possibly be? In the ace game, if I can play ten thousand times, it’s essentially a given that I’ll come out ahead. Why? Because the probability indicates quite clearly that over those ten thousand plays, I’ll almost certainly draw enough aces to win enough to overcome the ten thousand dollars it cost me. But the key factor that makes playing it many times “worth it” is that I get enough plays to overcome the odds against drawing an ace.
But a single play is completely different. The probability does not say that I can draw enough aces to come out ahead; in fact, the probability indicates quite clearly the exact opposite! The probability says that in one draw, I will almost certainly not draw an ace, and therefore will just lose. The fact that losing a dollar doesn’t bother me while winning a thousand makes me really, really happy doesn’t change that probability.
Sure it is. But again, in a large enough number of trials so that the probability is positive that you’ll win enough times for that expectation to come into play. The expectation just does not apply to one trial. It doesn’t apply, and you still have not demonstrated that it does. Show me an example of a single trial – not your example above which is really just a series of single trials that add up to a large number of trials – but an isolated, once-in-a-lifetime, single trial in which “expected value” has meaning and is useful in some way.
Then, perhaps, the remainder of our disagreement comes from the fact that you still misunderstand me. I do not, have not, and never will claim that expected value is meaningless. I claim that in a single trial (or a very small number of trials relative to the odds of winning), it’s meaningless. Please understand the distinction between those two; it’s the crux of this whole thing.
I claim again that in a single trial (or a very small number of trials relative to the odds of winning), it has no relevance. Show me that I’m wrong, don’t just keep telling me that I am.
I have presented definitions of “expected value” that support my claim. I’m still waiting for some equivalent evidence that supports yours.
Ah, perhaps this, then, is really at the heart of our remaining disagreement. You ask that question as if you hold it as an undisputed fact that there is indeed a point at which it becomes reasonable to play, and that the only disagreement between you and me is where that point is.
But you see, my answer to your question is that it never becomes reasonable to play. Make the jackpot arbitrarily large. Make it a gazillion dollars. Make it the entire GNP of every nation on Earth. Make the cost of a ticket a penny. Heck, make the cost of a ticket a tenth of a cent.
It just never becomes reasonable to play, when we stick to my original starting conditions that the number of tickets you can reasonably buy over your lifetime is very, very small compared to the odds against winning.
Why you ask? “Heck, if you won you could be ruler of the Earth, have anything and everything you’ve ever wanted or ever could want! And all you risk is a tenth of a cent!!”
But you see, none of that changes the simple probability. I am not going to win, I’m just not. So it’s never “reasonable” to play, when “reasonable” is defined strictly by the statistical numbers.
Now, of course if you define “reasonable” in the loose sense of “what do you have to lose compared to what you win?”, then it’s reasonable. But that definition strays from the pure statistics. It strays from the concept of “expected value”. That definition of “reasonable” doesn’t come from the expected value, it comes from “risk a little to gain a lot”. And, perhaps most importantly, if you apply that definition of “reasonable” to a lottery where the theoretical expected value is negative, and you apply it to a lottery where the theoretical expected value is hugely positive, you get the same answer. It’s just as “reasonable”, by that definition, in both cases.
Well, yeah, of course people win. But why is there almost always a winner in every lottery? It’s only because there are almost always more than 14 million tickets sold!
I mean, let’s bring in another wonderful hypothetical: Suppose the lottery only sold one ticket per game. Same dollar price, but just one ticket per game. All the lotteries are like that, and have been from the beginning. How many winners would we have seen up to now?
See, just like Chronos, your logic here is mixing up a true single trial with a whole bunch of individual trials that, in aggregate, amount to a large number of trials. If the lottery sold only one ticket, you wouldn’t see a winner in your lifetime. Your grandchildren, and your great-grandchildren, and your great-great-great-grandchildren, would all live and die without seeing a winner. Why? A small number of trials relative to the odds of winning.
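The “more than 14 million tickets sold” point can be made quantitative. A quick sketch, assuming each ticket is an independent 1-in-14-million draw (which ignores real lotteries, where many players can pick the same numbers):

```python
P = 1 / 14_000_000  # chance that any one ticket wins

def p_at_least_one_winner(n_tickets):
    # Probability that at least one of n independent tickets wins.
    return 1 - (1 - P) ** n_tickets

print(p_at_least_one_winner(1))           # one ticket per game: ~7e-8
print(p_at_least_one_winner(5_200))       # one ticket a week for 100 years: ~0.0004
print(p_at_least_one_winner(14_000_000))  # typical ticket sales: ~0.63
```

With 14 million tickets sold, a winner turns up roughly two games out of three; with one ticket per game, a century of weekly drawings yields a winner about 0.04% of the time.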
Well, again, this gets to what we’re using as the definition of “reasonable”. The odds of winning are one in ten. Now, whether those are “reasonable” odds depends on your definition of “reasonable”, doesn’t it? If you think they are “reasonable” odds, fine, that’s your choice. If I think that one in ten is not “reasonable”, that’s my choice.
But neither definition of “reasonable” changes the simple probabilistic fact that given one play, the most likely outcome is that you’ll lose.
And I claim, once again, that if you use expected value as a factor in your decision of whether to play once, you are simply mis-applying the concept of expected value. It’s certainly your prerogative to do so, but you can’t justify it by any definition of “expected value”.
I wasn’t talking about that. In Pedro’s game ($1 ticket, 10% chance of a $1E18 payoff) you said you’d buy a ticket. I quoted it once above, but I’ll do it again:
Presumably in the same game, but with a payoff of only $1, you wouldn’t buy a ticket. What I see here is that
the payoff matters to you.
Not just the odds of winning, but the payoff.
In a single trial.
It might not matter much, but that’s not the point. The point is that
the payoff matters to you.
In a single trial.
This is why I’m saying that even to you, the expected value is not meaningless. I’m trying to explain, in my above posts, a way to think about the decisions rational people might make in evaluating these gambles. I have offered a way to explain, using the dreaded “expected value,” human behavior in evaluating risk and potential reward. If you don’t like it, fine. If you don’t understand my model, that may be because I’m not explaining it well (perhaps someone else can try). But it’s not “meaningless.”
Understand that I am not and have not been trying to explain the one true unassailable “right” way of betting. There’s no such thing, because different people have different goals. You have to define your postulates and assumptions first. You say, for example, that
This tells me something about the shape of your personal happiness curve. Namely, it tells me that for you, h($N+$13E6) = h($N+$15E6) ($N your current assets). There’s nothing impossible about this (though I would guess that it would be more common for h(…) to be simply slow-growing, not horizontal, at these values).
Arrgh. First, I did make an attempt to explain how they were different. Second, I made an attempt to explain why you might not find either lottery worth buying a ticket. I’m sorry that you didn’t understand.
This does not mean that the expected value is meaningless. As I pointed out above, your own personal decision process does evaluate the “expected value” for some single-trial games. In particular, for you, odds of 1/10 are not too small to be overcome by a sufficiently large payoff. Odds of 1/14E6, however, are too small, for you. It seems to me that you (similar to Cabbage#101) are arguing that very small odds of winning should be rounded to zero. I am essentially arguing that rather than ignore small odds, a reasonable model can be achieved by discounting large gains. Qualitatively the predictions of the models are somewhat similar.
I’m going to ignore your last example (changing the value of the coin being flipped), because it’s stupid and has absolutely no relevance to this discussion, other than perhaps as a reductio-ad-absurdum of one of your own previous examples (the side bets in #82), which I also ignored.
Let me describe a really simple example. I resist doing this, because its quantitative predictions are not meant to be believed. Understand that this is only to be taken qualitatively; the happiness function is not meant to accurately describe the happiness function of a normal person. Its primary advantage is that it’s simple to talk about.
Consider a person, Zorro, whose happiness function h(z) = log[sub]2[/sub] z. That is, every doubling of his wealth increases his happiness by one unit. Suppose that initially Zorro’s assets are $2[sup]20[/sup] (about $1E6), so that his happiness is 20. Yanni offers Zorro a single-trial wager: If Zorro rolls a 6 on a six-sided die, he receives $10; if he rolls any other number, he pays $1. What is his expected happiness if he takes the wager? This is easy to compute; the result is 20 + 5/(6*2[sup]20[/sup] ln 2) = 20.00000114654. This is, notice, (very slightly) greater than 20, for an increase in his expected happiness; so Zorro takes the bet. If you do a first-order Taylor expansion (linear approximation) of h(z) about $2[sup]20[/sup], you get essentially the same result, since the stakes are low. But suppose that instead, to make it worth Yanni’s while, Yanni ups the bet. If Zorro rolls a 6, he receives $10*2[sup]19[/sup] (about $5E6); if not, he pays $2[sup]19[/sup] (about $5E5). Now what is his expected happiness? The same calculation now gives 19.59749375012, a major decrease in his expected happiness; so Zorro doesn’t take this bet, even though it’s just a scaled-up version of the previous bet. Zorro would, of course, be happy to repeat the first bet many times, however.
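Zorro’s two wagers are easy to check numerically. A sketch, using the same log-base-2 happiness function and $2^20 starting assets as above:

```python
import math

W = 2 ** 20  # Zorro's initial assets; happiness log2(W) = 20

def expected_happiness(win, lose, p_win=1/6):
    # Expected log2-happiness after one roll: gain `win` dollars with
    # probability p_win, otherwise pay `lose` dollars.
    return p_win * math.log2(W + win) + (1 - p_win) * math.log2(W - lose)

small = expected_happiness(10, 1)              # slightly above 20: take the bet
large = expected_happiness(10 * 2**19, 2**19)  # well below 20: refuse the bet
print(small, large)
```

This reproduces the 20.00000114654 and 19.59749375012 figures quoted above: the scaled-up bet is the same bet in expected dollars but very different in expected happiness.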
Now Zorro is not going to bet on many lotteries; for a $1 ticket with 1/14E6 odds of winning, he’s going to want to see a jackpot on the order of $1E12 before he starts to think about it. In particular, he sees very little difference between a $13E6 and a $15E6 jackpot, and he won’t bet on either. But in principle there is a jackpot which would convince him to bet.
(You have a different happiness function than Zorro’s. I don’t know what it is, but if as you say no possible payoff can overcome the odds of 1/14E6 on a $1 lottery, then what this says is that your happiness function is horizontal past $N+$14E6.)
So this simple example already has many of the qualitative characteristics of your personal decision process: not likely to bet on very-low-odds games, while following expected value for low-stakes games. But all I’ve done is replace the expected monetary return with an expected “happiness” return.
Roadfood, I think our “disagreement” boils down to this quote of yours:
However, I’m not so sure it’s a disagreement; maybe we only misunderstand each other.
I’ve already mentioned that the expected value for a single game loses its relevance as the chance of winning the game diminishes. If you have only a 1/googol chance of winning (however large the prize may be), and you can play only a single time, I agree, the expected value is absolutely worthless. You’re going to lose that game.
I still claim that expected value has relevance when we’re talking about odds that are not so outlandish. Furthermore, I’ll also agree that a one in 14 mil chance of winning the lottery is pretty outlandish. Expected values don’t play a large part in the million dollar lottery (though I do claim they are a small factor, this is not a point I’m trying to make in this post).
The games in which the expected value has a strong influence are the games in which it’s “reasonable” to expect to win a single game.
For the sake of argument, I’ll define “reasonable” here to be at least a 1/10 chance of winning. This number is somewhat arbitrary, but I think we can agree that if an event has a 10% chance of happening, it’s not so outlandish. For the sake of argument, we can adjust this figure if necessary.
Before we go any further, do you still claim that expected value is meaningless for a single game if the chances of winning are 10% or greater? In fact, now that I reread what I just quoted above from you:
I think we may be in agreement here as well–Expected values do matter when the odds of winning are “reasonable”.
If we do agree on this point, we can drop it (though I’ve enjoyed the discussion). If we don’t, then I’m willing to explain my case further.
Roadfood, I still don’t understand how you argue that expected value has no meaning in a single trial while saying you would buy a $1 ticket with a 1/10 chance of winning $1E18. I have presented a model that predicts behavior similar to your betting behavior, but it uses expected values; as long as some of your decisions depend on the payoff in some cases, it seems like this is unavoidable. Can you explain why you would buy that ticket without taking the expected value of some function?
Sure, but only as compared to the cost of playing, not related to the odds of winning. See, I still say that you are failing to fully understand the distinction I’m making. “Small risk for large reward” is, as I’ve said repeatedly, a perfectly reasonable decision making tool. And that’s what I’m doing above: small risk, large payoff, sure, no harm in playing. I have never said that the payoff is not relevant, nor that the comparison between cost-to-play and payoff is not relevant. Both of those are absolutely relevant and valid.
Where my dispute comes in is, as I’ve said again and again, when you try to apply the concept of expected value to a single trial to further split the cost-vs.-payoff comparison and claim that a higher payoff means it’s a “better” play. If you compare two single trials in which the cost to play is the same and the odds of winning are the same (and very long against you), but one has a higher payoff, it’s just nonsensical to claim that the higher payoff makes it a “better” play.
It would be a better play if you could play enough to have a reasonable expectation of winning. That’s what expected value means, that’s the definition of the bloody word, as I showed. Go look it up on Google.
The fallacy that you and Chronos and Cabbage and Pedro are all victim to is that you are not really separating a single trial from a large number of trials. You say “single trial”, but as Chronos and Cabbage showed in their recent examples, in your mind you’re really (unconsciously maybe) expanding that to many trials, in which expected value has meaning.
I’ll ask this again, at the risk of being accused of asking the same thing over and over: show me an example of a true single trial (not a single trial as part of many trials, but a one-time-in-the-life-of-the-universe trial) in which expected value is a useful concept.
The fact that rational people use something in their decision making does not, in and of itself, establish or prove the validity of that something. Please, again, try to remember what I’m actually saying. Of course people use expected value in their decisions about playing the lottery, that’s clearly evident in this thread.
I still claim that those people are mis-using the concept of expected value. They hold the belief that it tells them when it’s “better” to play the lottery, but the reality is that their “better” plays are no different than a “worse” play, statistically speaking. Does it “feel” better to play when the prize is large? Sure. Is it more fun? Sure. Will you get more if you win? Sure. Are you going to win? No. So how can it be better? It would be better if I won, and if my grandmother had wheels she’d be a little red wagon.
What I’m arguing is that you can’t separate a single trial from a large number of trials, since a large number of trials is a bunch of single trials. The question is always “Do you want to play one more time?”.
And your Google definitions of expected value all work perfectly fine for a single trial. The expected value is the average value you would win if you played many times. This does not say that you actually did play many times; it just says that if you were hypothetically to play many times, that’s what you’d get.
[Should have done this last time]:
Ok, if it’s not meaningless, then it has a meaning, right? Kinda follows. So give me the meaning. Give me the definition. Give me a one or three sentence definition of the term “expected value in a single trial”. No more examples, no analogies, no more “would you play this game?”. Just give me a straightforward definition in a few sentences, something like those definitions of the term “expected value” that I cited from Google.
I already have done so, and several of your Googled results also did so. I’ll quote one of them:
Earlier I described at great length the concept of “value”; we have assumed throughout that we know beforehand the “probabilities.” Here is a definition that makes no reference to “trials” at all. Plug in your concept of “value” and it will give you a number before you play the game.
I don’t understand this. I’ve described my algorithm model above; can you describe your algorithm? Given an opportunity to buy a single ticket for $x, with a chance of p=1/n of winning $y, when do you buy the ticket? When do you believe it is rational to buy the ticket? Why do you believe these algorithms differ (if they do differ)?
I know what “expected value” means. But you may have noticed that not all of your Googled definitions are the same. Some define the expected value empirically (these are the ones that make reference to a statistical average over a large number of trials) without describing how to calculate the quantity; others define a quantity calculable from the payoff table, without describing what it means empirically. In a formal presentation of statistics, one of these will be the formal definition and the other will be presented as an equivalence. Which way it’s done is a philosophical issue. But the second version of the definition (“expected value” == weighted average of all payoffs) has “meaning” even before a single trial, in the same way that I can say a fair coin has probability 1/2 of coming up heads, even before seeing it flipped.
As I mentioned above, I am doing this, and quite consciously, because it seems to me to mirror much more closely the behavior of the real world, which is what I’m trying to describe. I don’t believe it’s a “fallacy”; it’s a postulate which attempts to reflect typical human situations and, in my opinion, does a better job than your postulate. As we have been pointing out, human lives do not typically consist of a single random trial with predictable deterministic outcomes. As I also pointed out earlier, I feel that that idea (a single life, guaranteed to have exactly one random Bernoulli trial followed by one of two, perfectly predictable, possible outcomes) is far enough from the universe I see that it’s more an exercise in philosophy to me than in statistics; there are also too many things that are unspecified.
In this strange alternate universe, does some mystical being just appear and approach you with this proposition? What reason do you have to believe that he’s anything more than a hallucination, let alone trust everything he says? Has he appeared in the past (so that you have some guide to his reality)? If so, aren’t there altruistic people with too much time and money on their hands who would set up a foundation with some of their winnings to help bilk this mystical-being-with-no-concept-of-expectation-value whenever he returns again? If not, why would you trust him with any money at all?
Well, I would say that an action performed by “rational people” is almost tautologically “rational,” which is what you were asking. But I suppose you mean to say that these otherwise-rational people are acting irrationally in these cases.
There’s a subtle distinction between this case (“should people play the lottery?”) and the case you present above (“a one time in the life of the universe trial”), which probably comes down to philosophy in the end. In the first case, there are presumably millions of people choosing whether to play the lottery, so it really isn’t a single-trial case; it’s vulnerable to collusion between the allegedly-independent players to reduce their variance while maintaining their expected winnings, for one thing.
That is just nonsense. Go find a statistician and show him the above paragraph. Then come back here and tell me how long it was before he stopped laughing.
If you really believe that you can’t separate a single trial from a large number of trials, then you must also believe in the gambler’s fallacy, right?
This is the fallacy of the gambler who stands at the roulette wheel and watches red come up nine times in a row. He bets everything in his pockets on black on the next roll because he believes that black just has to be “due”. After all, what are the odds against getting nine reds in a row? Pretty huge! But the odds of getting ten reds in a row are even huger! So black’s just gotta hit!
But that’s completely wrong. The gambler is falling victim to the erroneous belief that the roulette wheel must have some kind of a “memory”, so that the tenth spin somehow “knows” that the last nine spins all came up red, and so black just has to hit. Or, to put it in Chronos’ words, that the tenth spin is always a part of a large number of spins that includes the previous nine. But the wheel has no memory, and each spin is a separate, independent trial, with no “connection” to any other spins.
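The independence claim is easy to check by simulation. A sketch, treating each spin as a fair red/black coin flip (so ignoring the green zeros on a real wheel):

```python
import random

random.seed(1)

runs_of_nine = 0   # sequences whose first nine spins were all red
red_on_tenth = 0   # ...and whose tenth spin was also red

for _ in range(2_000_000):
    spins = random.getrandbits(10)            # ten fair spins, one bit each
    if (spins & 0b111111111) == 0b111111111:  # first nine all came up red
        runs_of_nine += 1
        red_on_tenth += (spins >> 9) & 1

print(red_on_tenth / runs_of_nine)  # stays near 0.5: the wheel has no memory
```

Conditioning on nine reds in a row does nothing to the tenth spin; black is not “due”, which is exactly the gambler’s fallacy.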
What you’re doing is very similar, except that you’re doing it before the fact instead of after the fact (where the “fact” in the gambler’s case is the nine previous spins, and the “fact” in your case is yet-to-be-run trials). You are somehow thinking that any single trial must somehow be a part of some larger series of trials, and will in some bizarre way have “knowledge” that it’s a part of a larger series.
It seems that you just don’t understand what the term “single trial” actually means. Let’s say I write a computer program that simulates the lottery. I set the starting conditions to be that I buy one ticket, and run the program. It tells me the result, which is that I lose, of course.
That is a single trial.
Now I run the simulation a second time, same starting conditions. And, darn, same result.
That, too, is a single trial.
The two trials have no relation to each other. They are each independent, separate, unrelated, single.
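The simulation described above is essentially a one-liner. A sketch (the function name is hypothetical; one-in-14-million odds assumed):

```python
import random

def lottery_trial(p_win=1/14_000_000, rng=random):
    # One independent simulated play: True if this single ticket wins.
    return rng.random() < p_win

# Same starting conditions, run twice; each run is its own single trial,
# with no connection to the other.
print(lottery_trial())
print(lottery_trial())
```

Nothing in the second call knows or cares that the first call happened, which is the sense in which the trials are independent.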
Um, yeah, and that’s exactly why it has no meaning in a single trial! If you were hypothetically to play many times, that’s what you’d get. Your own words: “The expected value is the average you would win IF you played many times.” If. IF. IF! That “if” is an integral part of the definition, you can’t just ignore it and apply the first part of the definition to a situation where the condition of the “if” isn’t present.
Try this: “I’ll flip a coin and pay you a dollar if it comes up heads.” Don’t we all agree that the “if” in that sentence is crucial to its meaning? The reverse is that “if it doesn’t come up heads . . .” well, in point of fact, given just that one sentence, you can’t really say what will happen if it doesn’t come up heads, because the sentence doesn’t explicitly say. (True, in normal English usage it’s reasonable to infer that if it doesn’t come up heads, you won’t get the dollar, but the sentence doesn’t explicitly say that, right?)
If we put my sentence into math terms, we can say that:
A=“coin comes up heads”
B=“I’ll pay you a dollar”
And then the sentence can be expressed as:
if A then B
Right? But if “not A” then what? Again, the expression doesn’t explicitly say. B may or may not be true, but we can’t say based on that expression alone. In other words, when A is not true, that expression alone doesn’t tell you anything useful about the state of B. Any dispute with that?
Now let’s put Chronos’ sentence into math terms:
A=“you play many times”
B=“the expected value is the average value you would win”
So now, again, we have:
if A then B
And, also again, if “not A” we just don’t know anything about B, because the expression doesn’t say. The expected value might equal the average, or it might not.
And that is all I’ve been saying. When you don’t play many times, you just don’t know anything useful about whether the expected value will equal the average or not.
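Both halves of that “if” show up in a simulation. A sketch using the ace game (the $20 payoff is an illustrative assumption; the thread never fixes the prize):

```python
import random

def play_once(rng):
    # One trial of an ace-style game: pay $1, draw one card from a fresh
    # 52-card deck, win a hypothetical $20 prize on an ace (4/52 chance).
    return 20 - 1 if rng.random() < 4 / 52 else -1

rng = random.Random(42)

one_play = play_once(rng)  # always +19 or -1, never the in-between EV
n = 100_000
average = sum(play_once(rng) for _ in range(n)) / n

ev = (4 / 52) * 19 + (48 / 52) * (-1)  # 7/13, about $0.54 per play
print(one_play, average, ev)
```

A single play can only ever land on +$19 or −$1; only the long-run average settles near the $0.54 expectation, which is exactly what the “if you played many times” clause is about.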
Actually, I’ve enjoyed it, but I’m growing tired of going round and round about it.
And, I’m an idiot.
No, not because I’ve been wrong about anything I’ve been saying (which I haven’t), but because, having had some time to think it over in detail this weekend, it suddenly hit me why I intuitively knew all along that using expected value to decide whether to play the lottery is silly.
Let’s say, for the sake of argument and to make the calculations easier, that you’ll live long enough to play the lottery for the next 100 years. And that you are going to play it every single week. That means that you will be able to participate in 5200 lotteries before you die.
If you buy ten tickets a week, the maximum number of tickets that it’s possible for you to buy is 52,000. That means that the maximum you can “expect” to possibly pay for lottery tickets over your lifetime is 52 thousand dollars. So it really isn’t the “expected value” of the lottery that should be your focus, it’s that maximum of 52 thousand dollars. Your criteria for whether or not you play should be whether the prize is greater than that amount. I mean, it’s obvious, now that the reasoning finally crystallized for me: You’ll never spend more than 52 thousand dollars, so if the prize is more than that and you win, you’ll come out ahead.
If you buy a thousand tickets a week, the maximum you could ever spend would be $5,200,000. So, again, in that case it’s totally reasonable if your criteria for whether to play or not is if the prize is greater than five million two hundred thousand dollars. Any prize over that is guaranteed to be profit.
So even at a thousand tickets a week for a hundred years – which, come on, realistically is way way beyond what any normal person would spend on lottery tickets (heck, if you can afford a thousand dollars a week, do you really need to win the lottery?) – you’re still not all that close to the “expected value” of a lottery ticket.
So using $14 million as your criteria is silly, because you just can’t reasonably ever spend close to that much in the first place, so why care about it?
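The lifetime-spend arithmetic above, as a sketch:

```python
def max_lifetime_spend(tickets_per_week, years=100, ticket_price=1):
    # Upper bound on lottery spending: 52 weekly drawings a year,
    # played every single week for the given number of years.
    return years * 52 * tickets_per_week * ticket_price

print(max_lifetime_spend(10))     # 52,000 tickets -> $52,000 over a century
print(max_lifetime_spend(1_000))  # 5,200,000 tickets -> $5,200,000
```

Even the thousand-tickets-a-week extreme tops out around $5.2 million, well short of the $14 million expected-value threshold, which is the point of the post above.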
I never said I used $14 million as my criteria. In fact, I said precisely the opposite:
What I said was that the decision to play the lottery was based on many things, one of which is expected value. In particular, I said that three of the most significant factors are the expected value, the number of times you can expect to play the game (and whether or not that number gives you a “reasonable” chance of winning), and the funds required to play that number of times.
This seems fairly close to what you just posted. Maybe we’re in agreement now?
Saying you can’t draw conclusions from a sample size of one is not the same as saying expected value is literally meaningless in deciding to play the lottery a single time. Regarding the lottery, the analogous statement to Una’s would be that you can’t conclude anything about winning or losing the lottery based on having played it a single time: if you win, it doesn’t mean everyone wins; if you lose, it doesn’t mean everyone loses.
I’m giving up on this discussion, I don’t think any further progress is possible. To recap: it started when someone asked if it was reasonable to only play the lottery when the prize was more than $14 million. Someone answered yes it was, and I said no it wasn’t. You broadened that into saying that “the decision to play the lottery was based on many things”. You know, I’ve never taken issue with anyone’s personal decision making; use whatever you want, play the lottery only during a full moon for all I care.
But my first statement stands: It is not reasonable to use expected value and conclude that it’s only reasonable to play the lottery when the prize is over $14 million. This is GQ, and I’ve presented definitions, links, and a number of well-reasoned, math-based lines of argument to support my contention. I have repeatedly asked those who disagree with me for a straight answer to a simple question, a simple definition of a term, and some sort of rationale. I’ve seen none of that. No one has even directly rebutted my reasoning, you just keep saying that I’m wrong.
To be honest, I’m not 100% sure that either of us has ever understood entirely what the other was trying to say. Regarding your first statement in particular:
I’ve never disagreed with that. What I disagreed with was the claim that the expected value was completely meaningless in deciding to play a single lottery game. I’ve always claimed that it is an issue, but certainly not the only issue.
All I’ve been trying to do in my posts is give a very basic outline for developing a mathematical system for deciding to play the lottery. In other words, I’ve been trying to list the basic factors on which a rational decision to play the lottery can be based; one (but certainly not the only) of those factors is the expected value of the lottery ticket.
Ultimately, I guess the majority of this discussion originated from two things: 1. Your initial disagreement with the idea that “it’s only reasonable to play the lottery when it’s over $14 million”, and 2. From that point on I’ve been trying to get a basic formulation of when it is reasonable to play the lottery (and, once again, I agree it’s not the $14 million mark).
Perhaps the whole discussion would have flowed better had I said from the start, “You know, Roadfood, you’re right about the $14 million not being the reasonable cutoff point. I wonder if we can formulate a way to decide when it is reasonable to play such a game? I do think expected value will ultimately be a factor, but certainly not the only one.”
Anyway, sorry it got dragged out so long into this. No hard feelings, I hope?
Oh, definitely no hard feelings on my part. Like I said, for the most part I enjoyed the discussion, it’s just gotten to the point where I don’t have the energy to keep arguing against “rationales” that say a single trial is always part of a larger series.
To actually answer your question, yes, I agree that in the real world, real people can and do use many factors in their own decisions to play the lottery, and many of those factors are perfectly reasonable. My issue was always with people who try to say that the one and only factor dividing a reasonable play from an unreasonable play is the strict expected value of $14 million.