How many lottery tickets must be bought to assure every combination of numbers?

Sorry I haven’t been back here before now. I’ve got a few ideas to write (some of which have already been touched on in various forms), so I’ll go ahead and spell them out and see what people agree or disagree with.

Admittedly, some of this stuff I’d never given much thought to until this discussion. I mean, I knew about expected values and that sort of thing going in, but I’d never thought about basing lottery “strategies” (i.e., to play or not play) on them before.

Anyway, you’ve made some good points, Roadfood. Here are some thoughts that occur to me:

First consider a game that costs, say, $1 to play. Initially, let’s say this game offers a 100% chance of winning a net of $1 (you pay $1 and immediately get $2 back). You’d be a fool not to play. The expected value of your winnings in this game is $1.

Now alter the game. You pay $1 and have a 50% chance of winning $4 back. Do you play? What if instead it’s a 1% chance of winning $200?

Or, in general, what if the $1 game gives you a 1/n chance of winning 2n dollars (where n is some real number >= 1)?

The relevant feature of all these games is that they each give an expectation of winning a net of $1.

Specifically, would you pay $1 to have, say, a 1/googol chance of winning two googol dollars? I certainly wouldn’t; my chances of winning are so small that I’m virtually guaranteed to lose my dollar, even though the expectation of the game is that I would actually win one dollar.
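The constant expectation is easy to verify symbolically (a quick sketch, assuming integer n so exact fractions work):

```python
from fractions import Fraction

def net_expectation(n):
    """Net EV of the $1 game: a 1/n chance of winning $2n gross, else lose the $1."""
    p_win = Fraction(1, n)
    return p_win * (2 * n - 1) + (1 - p_win) * (-1)

# The algebra collapses to (2n - 1 - (n - 1)) / n = n/n = 1 for every n:
for n in (1, 2, 100, 10**100):   # 10**100 is a googol
    assert net_expectation(n) == 1
```

The win probability 1/n shrinks toward zero while the expectation sits fixed at $1, which is exactly the tension the rest of this post is about.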

So here we have a class of games in which the expected value of winning is the same ($1) for every game, yet it certainly seems that some games are good deals, while others are bad.

This raises the question: At what point do these games switch from being good deals into bad deals? How large does the probability of winning have to be before it’s a good decision to play?

I’ll say a bit more on this later.

Another (extreme) example, similar to one Roadfood mentioned earlier:

Two people, A and B, play a game. We’ll say that B has only a 1/googol chance of winning, while A wins otherwise (there are no ties).

When A wins, B pays $1,000,000 to A.

When B wins, A pays one googolplex dollars to B.

If you played this game, would you rather be A or B? The expected value of winning is overwhelmingly in B’s favor. On the other hand, the odds of winning, period, are overwhelmingly in A’s favor.

I would certainly play in A’s position, in spite of the expected values. Even playing a lifetime’s worth of games it’s doubtful in the extreme that I would ever lose a single game.

On the other hand, if A and B were immortal beings, playing the game over and over again throughout eternity, it’s clear that player B will kick the shit out of A in the long run, provided B has enough funds to survive the initial drought of wins he will most certainly experience when they begin playing the game in the first place.
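A sketch of the A-vs-B game with the numbers above; a true googolplex (10^10^100) is too big to compute with, so I’m standing in 10^10,000 for B’s payout, which only understates B’s edge:

```python
from fractions import Fraction

p_b_wins = Fraction(1, 10**100)   # B's 1-in-a-googol chance
a_prize = 1_000_000               # B pays A this when A wins
b_prize = 10**10_000              # stand-in for A's googolplex payment to B

# From B's seat, the expectation is astronomically positive...
ev_b = p_b_wins * b_prize - (1 - p_b_wins) * a_prize
assert ev_b > 0

# ...yet A essentially never loses. In floating point, the chance that A wins
# every one of a million games is literally indistinguishable from certainty,
# because 1e-100 vanishes next to 1.0:
p_a_sweeps = (1.0 - 1e-100) ** 1_000_000
assert p_a_sweeps == 1.0
```

Both asserts pass at once, which is the whole puzzle: the expectation and the near-certain outcome point in opposite directions.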

Now back to the earlier question regarding the class of games I described earlier:

At what point do these games switch from being good deals into bad deals? How large does the probability of winning have to be before it’s a good decision to play? (Or maybe it’s not a single “point” dividing good and bad games, maybe a better description is that the expected value slowly becomes less and less relevant as the probability of winning decreases)

I’ve tried to illustrate what I think are the two main factors in answering this question, time and funds, by giving the example of the game between A and B:

Any game from that class (in which you have an expectation of winning $1) is worth playing when (1) you have enough time to play the game often enough to give yourself a “reasonable” chance of winning occasionally, and (2) you have enough funds to cover the cost of all those games. (I’m going to leave “reasonable” undefined.)

In this sense, an immortal being with unlimited funds (who, for whatever reason, likes to win money anyway) would consider any of the games from that class to be a “good” game.

I understand that the immortal being is only a theoretical concern, however. The initial situation was whether or not it was a good deal to play a “single” game. At this point I would claim it’s a good deal when the individual considering the game thinks there is a “reasonable” chance of winning that single game. (I’ll assume paying the single dollar won’t be a problem here). 100% chance of winning? Obviously a good deal. 1/googol chance of winning? Obviously a bad deal (to me).

As I mentioned, I’m going to have to leave “reasonable” undefined here. I don’t think there are going to be any hard and fast rules by which you can define it; it is in large part up to the individual considering the game. Still, as I’ve mentioned before, it would be a function of (among possibly other things) how well you can afford that single game, and how much benefit the winnings would give you.

Ultimately, I’m not saying that the expected value is necessarily irrelevant when considering a single game, but I am saying that it becomes less relevant as the chance of winning the game decreases.

Anyone agree or disagree with any particular points?

So what? Yes, you don’t end up with the expected value. Yes, it is abstract. Why is this a problem? Expected value is a measure. If the expected value of your lottery ticket is higher than its cost, it means your ticket is worth more than the money you gave for it before the result of the random trial. Of course, intrinsic value is relative, so as others have said, if you are starving the ticket will cost you substantially more than it will a billionaire.

But I see your point. I think an example might help. Consider this (only one winning ticket):

Game 1:

  • Odds: 1 in 100,000. Payout: $20,000,000. Ticket price: $1.

Game 2:

  • Odds: 1 in 10. Payout: $20. Ticket price: $1.

Even though Game 1 has a higher expected value, you are more likely to profit from Game 2 in a single trial. But in a way, Ticket 1 is “cheaper”.
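Just to pin down the numbers (one winning ticket in each game, as stated):

```python
# Expected net per $1 ticket for each game:
ev_1 = 20_000_000 / 100_000 - 1   # Game 1: $199 expected net
ev_2 = 20 / 10 - 1                # Game 2: $1 expected net

p_win_1 = 1 / 100_000
p_win_2 = 1 / 10

assert ev_1 > ev_2        # Game 1 is far better by expected value...
assert p_win_2 > p_win_1  # ...while Game 2 pays off far more often in one trial
```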

This is simply not true; he might win.

OK, fair enough.

I do, however, believe that the “expected value” has meaning, even in a single trial, but with a caveat. In my previous post I tried to explain the way I tend to think about it, which is to consider a function which measures, basically, how happy I am, as a function of my assets. This function is a nonlinear, increasing (but mostly concave, and presumably bounded) function. The point is that what people are trying to do is not be rich, precisely, but be happy. So what we are trying to do, in this view, is maximize the expected value of this happiness function. Now for very small changes in total assets, we expect this function to be pretty flat, and we can approximate it by a line. Under these conditions, the expected happiness is linearly related to the expected monetary return, and we can maximize our expected happiness by maximizing the expected return.

This seems to me to be pretty straightforward, though from some of your comments I’m not sure if you agree even with this. I would, for example, pay $1 for one roll of a six-sided die if I got $10 for a six, even if I had only a single trial. Most likely I lose $1; but my expected return is positive. Since both $1 and $10 are small, this maximizes my expected happiness as well.
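The die game’s numbers, spelled out:

```python
from fractions import Fraction

# Pay $1, roll one die, collect $10 on a six:
p_six = Fraction(1, 6)
ev_net = p_six * 10 - 1          # = +$2/3 expected net per roll

assert ev_net == Fraction(2, 3)  # positive expectation...
assert 1 - p_six == Fraction(5, 6)  # ...even though a loss is the likely outcome
```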

But when the stakes get larger, the curvature of this happiness function becomes important, and the expected monetary return doesn’t directly reflect on the expected happiness gain. I wouldn’t, for example, pay $1000 for a single D6 roll with $10000 for a six, even though it is just the above game multiplied by 1000. A richer person (on a flatter portion of his happiness curve) might.

But in this view, what happens when the paybacks and the odds-against get outlandishly large is not that the odds are unreasonable; what happens is that a very large monetary payback confers only slightly greater happiness than a moderately large one. Odds that are too large cannot be reasonably overcome by correspondingly large payoffs, because as the monetary payoff increases without bound, the resulting happiness is bounded.

So, that’s the way I look at it. I believe there has been some experimental work with people trying to determine the shapes of these curves, but I don’t have any cites.

I know I shouldn’t keep posting on this, but it’s starting to bug me. I mean, this is Straight Dope General Questions, the place to go for answers. And Chronos, if I’m interpreting that “SDSAB” under your name correctly, it means that you’re on the Straight Dope Science Advisory Board.

So educate me, please. If I’m wrong in my belief that the concept of expected value has no rational meaning in a single trial, explain to me why I’m wrong, and tell me what the meaning of expected value is in a single trial. What does expected value tell you about a single trial?

One more little example, using my card game. Forget about any money involved for the moment, let’s just say you’re going to play the game once for no money to play, and no prize. The odds are 1 in 13 that you’ll draw an ace. What do you expect the outcome to be? What would you predict as the outcome? What is the most likely outcome?

That you won’t draw an ace, right? Can we all agree, at least, that the most likely outcome of one play of the game is that you’ll draw some card other than an ace?

So assuming (I hope) that we agree on that, does bringing money into it change that most likely outcome? If the theoretical expected value is much larger than the cost of a play, if it’s twice the cost or if it’s ten billion times the cost, the most likely outcome is still that you won’t draw an ace, isn’t it? So how on Earth does it make sense to play when the most likely outcome is that you’ll lose, regardless of what you paid to play or what the prize is, or if you can easily afford the cost or if the prize is large enough to have a major positive effect on your life? I just don’t get how the expected value has any bearing on the single trial.
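Here is that point in miniature, using the card game’s numbers (52-card deck, 4 aces, so 1 in 13):

```python
from fractions import Fraction

p_ace = Fraction(4, 52)
assert p_ace == Fraction(1, 13)

def expected_net(cost, prize):
    """Expected net of one play: win `prize` on an ace, lose `cost` otherwise."""
    return p_ace * prize - cost

# The prize can swing the expected value from negative to hugely positive...
assert expected_net(1, 2) < 0
assert expected_net(1, 10**10) > 0

# ...but the most likely outcome of a single draw never budges:
assert 1 - p_ace == Fraction(12, 13)
```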

Chronos, how, really now, can you rationally say that you’d be a fool not to play a single trial when you know that the most likely outcome is that you’ll lose? Educate me, explain it. What is the theoretical/statistical basis? And please, no more of this stuff about the relative value of money or that you can afford to lose a dollar and can put the $1000 in the bank. That doesn’t speak to the theoretical basis of using expected value in a single trial.

Yes, we agree on that.

Because obviously the reward is a significant factor too in a player’s decision. Another example:

Game X: Odds: 1 in 10. Payout: a quintillion dollars. Ticket price: $1.

You are likely to lose. Would you play?

Right, so what you said is perfectly reasonable as your own personal view and your own personal approach to the decision-making of various “bets”. But don’t you see that it’s a purely emotional basis? I mean, you yourself used the word “happy”, which is emotional, isn’t it?

But my dispute with the claim that expected value has meaning in a single trial is strictly in the mathematical/statistical realm. That’s why I regret going off on all these examples, because I think all that’s done is to steer the discussion away from the crux of the matter.

Forget the examples, forget the specific games and the specific amounts of money, forget whether losing a dollar is meaningful to you, or whether winning a trillion trillion dollars is more meaningful than losing your life’s savings. Let’s stay strictly in the mathematical, statistical, theoretical realm.

Now, in that realm, of pure math and statistics, what does “expected value” mean in a single trial? In a large number of trials, I believe I can define it. It is the value of a win divided by the odds against winning. So a prize of $1000 with odds of winning at 1 in 10 gives an expected value of $100. Is that right? But isn’t another way of saying it that it’s an average of the win amounts over the total number of plays? If the odds are 1 in 10, and I play ten times, it’s most likely that I’ll win once and lose nine times. The one win gets me $1000, the nine losses get me zero, but the average of one $1000 and nine zeros is $100. If any of this is wrong, please correct me.

Based on that, I see that, in some large number of trials, the odds will do their job, so to speak, and on average, I’ll win once for every ten times I play.

So now when it comes to the decision as to whether it’s “good” to play or not, comparing the cost of one play to the expected value of one play makes sense. If, on average, I can expect to win $100 per play, then if it costs me less than $100 per play, I can expect, on average to win more than I pay. If the cost of a play is, say, $50, then playing ten times will cost me $500. I will likely win one of those ten plays, winning $1000, making me $500 richer.

And, of course, the more times I play, the greater the likelihood that my actual winnings will equal the average expected value of one play.

But isn’t the opposite of that last sentence that the fewer times I play, the less likely it is that my winnings will equal the average expected value of one play? Is there a fault in my reasoning here? The lower limit in the number of plays I can make is one. And in a single play, the odds say that most likely I will lose. So if the decision is now whether or not it’s “good” to play just once, the odds indicate quite clearly that it’s not good; one play will most likely lose. The expected value is an average. An average over one is certainly mathematically sound, it’s just the value divided by one. But what is the statistical meaning of an average over one? On average, I’ll win once every ten plays. So does that mean that, on average, I’ll win one tenth in one play? Well, I guess so, mathematically, but since I can’t actually win a tenth of a play, it seems like it doesn’t make sense to apply the average to a single trial.

I mean, flipping a coin will come up heads, on average, half the time. Does that mean that flipping it once will come up with half a head? Of course not, that doesn’t make any sense. So how does the expected value – which is the average over a large number of trials – make sense in one trial?
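One way to see what the average does and doesn’t promise is to simulate the 1-in-10, $1000-prize game from above (a sketch; the seed is fixed only so the run is repeatable):

```python
import random

random.seed(0)

def play():
    """One play: $1000 with probability 1/10, else $0 (ignoring the ticket cost)."""
    return 1000 if random.random() < 0.1 else 0

def average_winnings(n):
    return sum(play() for _ in range(n)) / n

# A single trial can only ever be $0 or $1000 -- never anything like the $100
# expected value -- but the running average homes in on $100 as trials pile up:
assert average_winnings(1) in (0.0, 1000.0)
assert abs(average_winnings(100_000) - 100) < 10
```

The expected value never describes any one outcome; it describes where the average of many outcomes settles.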

So that’s my reasoning, strictly in the mathematical realm – no emotion, no happiness quotient, nothing about the relative values of the costs and prizes – for why I believe that expected value has no meaning in a single trial.

Chronos, can you now explain, staying in the mathematical realm with none of that other stuff, why I’m wrong?

I have realized that all of these “Would you play?” questions serve mostly to obscure the issue, rather than help it.

The right question is not “Would you play?” but rather “Is there a statistical justification for playing?” In the real world, if I was offered a single play of the above, of course I’d risk the buck. And, most likely, I’d lose it, too. But is there a statistically valid justification for playing? In other words, do the statistics indicate that it’s a “good” play or a “not so good” play? I say that it’s the latter. I say that statistically your odds are one in ten, and the “expected value” of a hundred quadrillion dollars has no meaning, statistically, in a single trial. Emotionally, as a factor in my human decision to play or not? Sure. But in purely mathematical/statistical terms? No.

[Damn, I wish I’d stop hitting submit and then thinking of what I should have said.]

Pedro (and Chronos): Remember that this started in talking about the lottery. The claim was made that it makes sense to buy a lottery ticket when the prize is over $14 million, but it doesn’t make sense when the prize is less than $14 million.

I disputed that. If you’re going to play the “would you play?” game, then let’s go back and apply it to that. Would you play, at 14-million-to-one odds, risking one dollar to win $13 million? Gosh, it wouldn’t hurt me at all to lose a buck, and $13 million would be really handy! Oh but, darn, the expected value is less than that measly old dollar, so it doesn’t make sense to play.

Now the prize is $15 million. Expected value is greater than the dollar. It now makes sense to play!

If that’s really the way you think, then explain the logic to me. The dollar has the same low value in both cases. The prize is highly significant to you in both cases. The odds against winning are the same ultra-longshot in both cases. What, you’re going to tell me that the difference between winning $13 million and winning $15 million is significant?

If I do the “Would you play?” thing to those two cases, I come up with the same answer in both, there is just no difference between them. And yet one has a negative expected value and the other has a positive expected value, clearly indicating that one is a bad play and the other is a good play. Makes no sense.
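For concreteness, the arithmetic behind the two contrasting cases (hypothetical round numbers; real lotteries complicate this with shared jackpots and lesser prize tiers):

```python
odds = 14_000_000
ticket = 1.0

ev_13 = 13_000_000 / odds   # ~$0.93: a "bad" play by the expected-value rule
ev_15 = 15_000_000 / odds   # ~$1.07: a "good" play by the same rule

# The rule flips between the two cases...
assert ev_13 < ticket < ev_15
# ...while the chance of winning, 1 in 14,000,000, is identical in both.
```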

I don’t see how this is purely an emotional issue like you said. Clearly you recognize that reward is significant in your decision, right?

I pushed my examples to extremes to make the point obvious. If I was offered to buy a lottery ticket that had an expected value of say 2% or 3% over its cost I probably would not play, because playing the lottery isn’t really my thing. This decision is not a true or false, yes/no proposition. Statistics and probability alone cannot give you the absolute answer to your question that you seek.

I wanted to avoid the “1+1=2” cliché and I ended up bungling this sentence but I think you know what I meant.

Ok, to me, those two sentences contradict each other. If it’s not emotional, then it’s based on statistics and probability. But if statistics and probability alone can’t give the answer, then the rest of the answer must be coming from the emotional side.

The answer I seek is to the question: how does the concept of expected value have meaning in a single trial? If statistics and probability alone cannot give the answer, then you are agreeing that within the realm of statistics and probability, expected value has no meaning in a single trial, aren’t you? If it did have meaning within that realm, then the answer could be found within that realm. And if it’s not found within that realm, then what realm is it found in? Have I missed something?

Hmm, no; it might be based on practical considerations, for example.

I think I provided some examples where it did have usefulness as a quantifier of risk vs reward. I don’t know how to make my case any better.

But if those “practical considerations” aren’t based in statistics, by what definition are they practical?

Ok, then let me once again try to make my point clearer, because I’m allowing myself a bit of hope that we might actually be getting close to a meeting of the minds here.

As a general principle, I agree that risk vs. reward is a meaningful criterion for a decision. I do that myself. When it comes to playing the lottery, I think we have at least a general agreement that the risk – a few dollars – is small and the potential reward – millions of dollars – is very large, and that the odds of winning are very small.

So the general principle – low risk, high potential reward, even though low odds of winning – is something I have no disagreement with.

The problem I have is when people try to further split that principle by what I say is a misapplication of the concept of expected value. In other words, people who say that “low risk, high potential reward, even though low odds of winning” somehow makes sense only when the expected value is positive. Making that distinction in something like the lottery, where the number of trials you can reasonably participate in is so incredibly small relative to the odds of winning, is just nonsensical. How does that distinction have any meaning when you can only play a few hundred trials, and the odds against winning are 14 million to one?

Situation #1: the lottery prize is $13 million, tickets cost a buck. Does that fit the “low risk, high potential reward, even though low odds of winning” scenario? So on a risk vs. reward basis, it makes some sense to play.

Situation #2: the lottery prize is $15 million, tickets cost a buck. Does that fit the “low risk, high potential reward, even though low odds of winning” scenario? So on a risk vs. reward basis, it makes some sense to play.

But now you apply expected value to those two situations, and you conclude that #2 is better than #1? That makes no sense at all. Why? Because what expected value really tells you is how much you could “expect” to win, statistically, IF you could play enough times to be reasonably assured of winning. You would need to play the lottery 14 million times before you would, statistically, have a better-than-even chance of winning. So, again, IF you were going to play 14 million times, it would be perfectly rational to say: “14 million plays will cost $14 million, and so it’s only a good deal if the prize I win is more than $14 million.”

But since you can’t play 14 million times – you can’t play even a tiny fraction of 14 million times – “expected value” tells you nothing useful. It’s only a useful, practical, meaningful concept when the number of trials is statistically significant. How can you claim otherwise? How can you look at a statistically insignificant number of trials, and then claim that expected value is statistically significant?

The actual, practical, real-world risk-vs.-reward is no different when the lottery prize is $13 million than when it’s $15 million.

If you disagree, explain why the risk-vs.-reward is different in those two situations. Explain why expected value is statistically significant when the number of trials is statistically insignificant.

I don’t want this to sound like a rant, but I’m getting really frustrated. I know that Chronos has yet to weigh in today, but I’m getting frustrated at continuing to ask for rational explanations, and not getting any. If you “know” that I’m wrong here, you should be able to explain it in more detail, in more mathematical/statistical terms, than just saying “I don’t know how to make my case any better.” I’ve tried hard to make my case on a mathematical/statistical basis, using my best knowledge of it. I don’t see any refutations on the same basis.

I gave this some thought and I don’t see any holes in the math, so what’s your take on it?

I think we disallow infinite expectations and say that this random variable does not have an expected value, which is the same as saying it is not a useful concept in this situation. The pseudoparadox makes sense, though, if you think about it in terms of the mass distribution/fulcrum analogy: if I keep loading an infinite beam with ever-increasing loads, the fulcrum will be located at infinity. Also, if you extend this to multiple trials, there is no insight to be gained about how much to pay to play, so the problem isn’t single-trial.

So I don’t think this detracts from my previous posts. It’s certainly interesting nevertheless. Correct me if I’m wrong, you are far more knowledgeable than me on this subject.

Or, another way to put it: the random variable is not centered, so it has no expectation. I’d like to hear your opinion on this, ultrafilter.
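Assuming the game being alluded to here is the classic St. Petersburg setup (flip a fair coin until the first head; landing it on flip k pays $2^k), the divergence is easy to see directly:

```python
from fractions import Fraction

def partial_expectation(max_flips):
    """Expected payout, counting only games that end within max_flips flips."""
    return sum(Fraction(1, 2**k) * 2**k for k in range(1, max_flips + 1))

# Each flip contributes exactly (1/2^k) * 2^k = $1 of expectation, so the
# partial sums grow without bound -- there is no finite "fulcrum" for this
# distribution:
assert partial_expectation(10) == 10
assert partial_expectation(500) == 500
```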

Well, look, Roadfood. We are all, presumably, humans here. We are finite, emotional beings; our goals in life are presumably not the sort of abstract rational goals that are easily quantified. I used the word “happy” as an attempt to boil down all of the goals of a human to a single one-dimensional quantity. These might represent raw, animal, emotional joy; or deep intellectual satisfaction; or something else entirely. But you are asking how a human (you) ought to play, which means you have to define your goals somehow. I’m just using a word “happy” to represent whatever those goals are. If you prefer, you can use the more generic term “utility”; I avoided that just because it is a more generic term.

In my previous post (#103) I tried to explain that I think there’s a reasonable definition of “happiness” (or “utility” or “value” — understand all of these to be synonymous technical terms, standing for some quantified personal notion of the values of stuff) which makes more sense to use than simple monetary amounts. But when the values are sufficiently small, using money is a reasonable approximation; maximizing expected happiness is the same as maximizing expected monetary gain. My point was that the large values of money that are being thrown around here — $1 googol here, $1 googol there, and pretty soon, you’re talking real money! — even apart from being economically meaningless, cannot be linearly related to the low ticket prices, because humans place different values on different amounts of money. I don’t think that this is irrational; earlier I tried to give a few reasons (and Chronos, in post #95, gave other examples). This is not an unknown idea in economic theory, by the way. I’ve avoided, and will continue to avoid, talking specifically about the shape of this happiness curve, because it’s clearly pretty complicated; it’s different for every person, for example, and it changes over time. Also, although I think there has been research done on it, I don’t know anything specific about it, so anything I could say would be even more uninformed speculation than this post already is. But even qualitative understanding of this idea does allow for explanations of a lot of the questions you’re asking.

But why, you ask (and ask and ask :)), is “expectation” the correct concept for a single trial? Why is the correct concept not just whether you are more likely to win or lose?

You have already said that your personal decision process, whatever it is, accounts for some notion of expected value:

So you’re basically arguing that everyone — not just everyone else, but you too — would behave irrationally, given a gamble with that payout; sort of odd.

But let’s look at this decision process. I look at this game and see a ticket price of $1. A loss of $1 represents a very small loss of happiness to me. And for that very small pain, I have a 10% chance of getting everything money can buy me — a very large gain in happiness. Why is this tradeoff irrational?

Is it never rational to make a small sacrifice for a chance at an extremely large gain? First, any human’s life is made up of a large number of choices (a human lives for a couple of billion seconds), many of which have unpredictable outcomes. You might not ever play exactly the same game again, but you’ve already played and will again play games with similar win probabilities. Ever throw a long pass in football? Ever kick on goal in soccer? Swing hard in baseball? These are all low-success-probability events. Now this might be an arguable point, if your life really did consist of a single Bernoulli trial (all other events following deterministically from its outcome). At that point, I think it differs enough from real life that it becomes a philosophical argument.

How about your two contrasting lottery games (each with 1/14E6 probability of winning; one with a payoff of $13E6 and the other with a payoff of $15E6)? You say, intuitively, that there’s no important difference between these payoffs, so for a single trial we should be willing to play the $13E6 game if we are willing to play the $15E6 game. The way I try to explain this is that, for you, $N+$13E6 and $N+$15E6 (where $N represents your current assets) have very similar happiness values; the values of your expected happiness for the two games are very nearly equal. Perhaps your threshold is just higher: maybe you’d play the game if the payoff were $100E6 or $1000E6. On the other hand, maybe no payoff is large enough for you. This is not impossible. I said earlier that the happiness function h($x) (where $x is your assets) is probably bounded; if h($∞) − h($N) < 14E6 × (h($N) − h($N−1)), then for your happiness function no monetary payoff can overcome these odds.

On the other hand, if we accept a nonlinear, bounded “happiness” function, then the expected happiness is bounded, giving us a finite happiness cost that we’d be willing to pay. (Even without my own personal nonlinear utility of money, there are pretty sound economic reasons to truncate the sum well before, say, 1 sextillion dollars, a sum of money that I think is utterly meaningless in today’s economy.)
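To make the bounded-happiness condition concrete, here’s a toy curve; h(x) = 1 − 1/(1+x) and the $50,000 asset figure are purely illustrative stand-ins of mine, not anything from the thread:

```python
# Toy bounded happiness function: increases forever but never exceeds 1.
def h(x):
    return 1 - 1 / (1 + x)

N = 50_000               # hypothetical current assets, in dollars
odds = 14_000_000        # the lottery's 1-in-14-million odds

happiness_cost = h(N) - h(N - 1)   # pain of losing the $1 ticket
best_possible_gain = 1 - h(N)      # h(infinity) - h(N): the most any win can add

# For this curve, h(inf) - h(N) < odds * (h(N) - h(N-1)), so no payoff,
# however large, makes the gamble a net gain in expected happiness:
assert best_possible_gain < odds * happiness_cost
```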

And still, even in your post, I have yet to see anything that even remotely approaches an answer.

No, I have said quite the opposite. I think I’ve made it clear that, in my personal decision making process, there is no difference between a lottery when the prize is $13 million (negative expected value), and a lottery when the prize is $15 million (positive expected value).

It’s not irrational, it’s just not based on expected value. I mean, come on, read what you wrote up there: the price you might lose, the chances of winning, and the amount you might win. No expected value in sight.

Again, you’re arguing against something I’ve never said. Please try to understand this: my point from the very beginning has been about the specific concept of “expected value”, not about whether it’s rational to try for a large gain. My issue is not whether it’s rational to make a small sacrifice for a chance of a large gain, but whether it’s rational to use a concept that only holds true for a large number of trials to make a decision about a single trial. Do you see the difference?

But see, you’re just wrong here. I’ll try to say it one more time: I claim that it’s just nonsensical to even talk about the “values of your expected happiness for the two games”. Because in a single trial, the “expected value” – be it of money or happiness or whatever – just doesn’t have a meaning that is useful.

Again, the issue I have is not with the specifics of whatever equation you want to try to come up with, it’s that any such equation has no useful meaning in a single trial.

I mean look, here’s an analogy that most likely will only confuse things more, but I’ll try anyway.

Game #1: You put down a dollar. I flip a penny. If it comes up heads, I give you two dollars, if it comes up tails I keep your dollar.

Game #2: You put down a dollar. I flip a fifty-cent piece. If it comes up heads, I give you two dollars, if it comes up tails I keep your dollar.

Which game is better? “Well,” I say, “obviously game #2 is.”

“But why?” you ask, “They look the same to me.”

“But a fifty-cent piece is worth more than a penny,” I answer.

“But,” you reply, “The value of the coin being flipped has no effect on the outcome, it’s meaningless.”

“Ok,” I try to explain, “Maybe your threshold is just lower. What if we make it a twenty dollar gold piece instead of a fifty-cent piece?”

“That doesn’t change anything,” you argue, “It still has no effect on the outcome.”

I know this is not exact, it’s an analogy. But that’s what I see you doing. I’m saying that expected value is meaningless for a single trial, it has no effect on the outcome. And rather than explaining why I’m wrong, you’re telling me that maybe it’s just because my threshold is different? I claim that the concept just doesn’t apply, so trying to explain it in terms of different values still doesn’t make it apply, or tell me why you think it should apply.

In your paragraphs up there talking about my two contrasting lottery games, you didn’t make any attempt to explain how the two games are or could be different, in any sense. If they are, in fact, different to you, tell me what the difference is.

Wow, this thread really grew in the past day… But I think I can give an argument why expected value is meaningful for a single trial.

Let’s take the ace game again, the one where a player pays $1, and if he draws an ace, he wins a grand. Roadfood, I think we are in agreement that if you can play the game as many times as you’d like, you should, correct? Over many plays, the likelihood is overwhelming that you’ll come out ahead.

Now, suppose a guy (who we’ll assume you know to be completely reputable and honest, of course) comes up to you on the street, and offers you a single chance to play this game. You refuse, since it’s only a single play. Then another guy comes up to you, and makes the same offer. Again, it’s only a single play, so you turn him down, too. All day as you walk down the street, you keep on getting such offers (hundreds of them, before the day is done), and you turn down each one, since each one is only a single play. But what you’ve just done is turn down the chance to play the game many times. If a game is not worth playing once, then it is not worth playing many times, since even if you’re playing repeatedly, you’re still only playing one game at a time. Contrapositively, if a game is worth playing many times, it’s worth playing once.
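A quick simulation of that “hundreds of single offers” day, using the ace game’s numbers (the seed is fixed only for repeatability):

```python
import random

random.seed(1)

def ace_game():
    """One $1 play: a 1-in-13 draw wins $1000, so net +$999 or -$1."""
    return 999 if random.randrange(13) == 0 else -1

# Accept every one of 300 separate single-play offers over the day.
# Each is "just one trial," yet the day as a whole is a repeated game:
day_total = sum(ace_game() for _ in range(300))
assert day_total > 0   # at about +$76 expected per play, a losing day is rare
```

Refuse each offer individually and you refuse all 300 of them; accept each one and the expectation does its work across the day.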

Please note, by the way, that I’m not saying that it’s idiotic to not play the lottery whenever the expectation is positive, nor am I saying that it’s idiotic to play when the expectation is negative. In a lottery, the prizes are generally large enough and the odds long enough that a number of less tangible considerations come into play, such as the real value function for money, the enjoyment from playing the game, etc., and these considerations will vary from person to person. Personally, I don’t play the lottery, and I don’t even play small-stakes, even-money games like quarters, because I don’t find them enjoyable. But if one’s goal is strictly to earn money, and one considers n dollars to be worth exactly n times as much as 1 dollar for all n, then expectation is the way to go.

[Ah, once more into the breach…]

Ok, to everyone who believes that I’m just being stubborn, or that I’m hopelessly dense or ignorant, or that I’m just being argumentative for its own sake, let me make a distinction clear.

If I was asking for a definition of “expected value”, I could go to Google and type in “define: expected value” and get lots of factual answers. In fact, I did that and here are a few:

That was the one that came out on top. Please notice how it says “in the long run.”

Some more (in the order that Google presented them):

Please notice how it says “over an infinite number of samples”.

Please note how it says, “if it is repeated a large number of times.”

There are lots more, go try it yourself.

Now then, aside from the obvious fact that many of those definitions clearly and unambiguously support my claim that “expected value” is only meaningful in a large number of trials, they are also all clearly factual answers to the question “what does expected value mean?”

But the question that I’ve been asking over and over is, “What does expected value mean, with respect to a single trial?”

If there has been a posting on this thread that answered that question in a factual way, in a way that even remotely resembles the manner of the answers I got from Google to the more general question, cite the post number and I’ll apologize from now until April.

If not, then I’m still waiting to see such a factual answer to my simple question. Of course, I will settle for a straightforward admission that, as is clearly evident from some of the definitions above, there just is no meaningful answer to my question, which is all I’ve ever claimed.