I’m so out of my depth it isn’t funny, but I’ll try. Taking Indistinguishable’s problem, all the prisoners have to do is guess a word. Any word. And it doesn’t have to be the same word. There are an infinite number of prisoners, but only a finite number of words. (Assuming, as I think is implicit, that the problem is posited in English or some other language known to all the prisoners.) The stuff about lining up and being able to see all prisoners before them is misdirection. With an infinite set of prisoners guessing against a finite set of words, at least one of them has to get it right. Cake and a pony all 'round?
Okay, I’ve now got my head around it. I must say, the concept of “perfect logicians” is easier to understand when there are only a finite number of them.
Interesting bunch of problems. Maybe it’s a good time to end the hijack!

I’m so out of my depth it isn’t funny, but I’ll try. Taking Indistinguishable’s problem, all the prisoners have to do is guess a word. Any word. And it doesn’t have to be the same word. There are an infinite number of prisoners, but only a finite number of words. (Assuming, as I think is implicit, that the problem is posited in English or some other language known to all the prisoners.) The stuff about lining up and being able to see all prisoners before them is misdirection. With an infinite set of prisoners guessing against a finite set of words, at least one of them has to get it right. Cake and a pony all 'round?
PBear42, the object isn’t to get at least one right, the object is to get only a finite number wrong. So no cake or pony, sorry!

If this is confusing, consider an analogy: suppose I am granted money at random with a 1/2 probability of earning $2, a 1/4 probability of earning $4, a 1/8 probability of earning $8, a 1/16 probability of earning $16, and so on.
What’s the expected amount of money I make? 1/2 * 2 + 1/4 * 4 + 1/8 * 8 + … = 1 + 1 + 1 + 1 + … = infinite.
I think this is either incorrect or as expected. If you can win only one of the sums (e.g., flipping a coin and doubling the initial $2 every time it comes up heads), they can’t really be added, since winning $16 also means losing all the other opportunities. (Unless you get to keep flipping, in which case you haven’t really won $16 yet, because you might win $32 instead.)
If, on the other hand, you have infinite possibilities for earning money, isn’t it rather reasonable to expect infinite earnings?

I think this is either incorrect or as expected. If you can win only one of the sums (e.g., flipping a coin and doubling the initial $2 every time it comes up heads), they can’t really be added, since winning $16 also means losing all the other opportunities. (Unless you get to keep flipping, in which case you haven’t really won $16 yet, because you might win $32 instead.)
If, on the other hand, you have infinite possibilities for earning money, isn’t it rather reasonable to expect infinite earnings?
You’re being a bit misled by “expected value”, which has a specific mathematical definition that’s a bit difficult to understand in certain circumstances. Suppose I have a game where I’m gonna flip a coin and give ya a cool million if it comes up heads, and absolutely nothing if it comes up tails. A fifty-fifty shot. The expected value is:
.50 (1,000,000) + .50 (0) = 500,000
That’s the mathematical definition, but even so, it’s impossible to get 500K from playing this game. It can never happen. Rather, the “expected value” is what would happen if you played an absurdly large number of times, and then averaged the outcomes. Which means you’d be perfectly happy to play this game a billion times, even if it cost you 499,999 to play it each time. Because on average, you would win one dollar every time you play. And this is, in fact, how casinos operate. They tilt the odds just a teensy-weensy smidge in their favor, and then they try to get as many people to play the game as possible.
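That “average over an absurdly large number of plays” reading is easy to check with a quick simulation (a sketch of my own, in Python, not anything from the thread):

```python
import random

random.seed(42)  # fixed seed, so the run is repeatable

# One play: heads pays a million dollars, tails pays nothing.
trials = 100_000
total = 0
for _ in range(trials):
    total += 1_000_000 if random.random() < 0.5 else 0

average = total / trials
print(average)  # hovers near the expected value of 500,000
```

No single play ever pays 500K; only the long-run average lands there.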
With an infinite number of outcomes, however, “expected value” gets a little cloudier and tougher to digest. With Indistinguishable’s example, we have:
.50 (2) + .25 (4) + .125 (8) + .0625 (16) + … = Infinity
The expected value is infinity, but if you play the game exactly one time, you will without a doubt receive a finite number of dollars back. The tricky part is averaging an infinite number of attempts. Even though each outcome is finite, there’s still no limit to how much you can receive. Which means that if every dollar were worth the same to you as every other dollar, you’d be willing to play this game no matter how much it cost.
In reality, though, dollars have a diminishing marginal value which makes it extremely difficult for us to impartially compare the costs and benefits of an abstract theoretical probability question. If I gave you a 100% chance of winning a million dollars, or a 50% chance of winning 10 million, then most people would take the free money rather than the risk. One mil is life-changing money. The extra nine? Nice, absolutely, but I for one would rather have a guarantee of comfort than risk the flip of a coin. But if you went by expected value, you’d expect a different result.
100% chance: 1.0 (1,000,000) = 1,000,000
50% chance: .5 (10,000,000) + .5 (0) = 5,000,000
The expected value of the risky game is five times as much. But only a minority of people would take such a chance, because each dollar isn’t worth the same as the previous dollar. The first million is a helluva lot more valuable than the next four. That one fact allows a better understanding of a great deal of economic thought. This is why you’d never risk your life savings on the infinite expected value game, even though an infinite expected return would indicate that you should play regardless of the cost.
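The “each dollar is worth less than the last” point is exactly what a concave utility function captures. As an illustration (log utility is my own choice here, not something the thread commits to):

```python
import math

# u(w) = log(1 + w): a concave utility, so each extra dollar adds less
# utility than the one before it. Purely illustrative.
def u(wealth):
    return math.log(1 + wealth)

sure_thing = u(1_000_000)                  # guaranteed $1M
gamble = 0.5 * u(10_000_000) + 0.5 * u(0)  # coin flip: $10M or nothing

# Expected dollars favor the gamble (5M vs 1M), but expected utility
# favors the sure million.
print(sure_thing > gamble)  # True
```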

So, when they meet beforehand to plan their strategy, this is what they’ll do: for every possible winning set, they’ll pre-choose some particular guess-sequence from it. Once placed in line, the prisoners will each determine what the winning set actually is, and then make their guess in accordance with the pre-chosen guess-sequence for that winning set.
(bolding mine)
Could you or someone expand on how the guess-sequences are pre-chosen? It’s not trivial, and it’s not obvious (to me) that this is necessarily possible. There are an uncountably infinite number of possible head-sequences. I believe you will need to pre-choose an uncountably infinite number of guess-sequences. You have an infinite number of prisoners, but only a countably infinite number, so at least one of them needs to come up with an uncountably infinite number of them on his own.
A bit of an aside, maybe, but how large is the “winning set”? As large as the reals, or larger? What about the set of possible winning sets?

Could you or someone expand on how the guess-sequences are pre-chosen? It’s not trivial, and it’s not obvious (to me) that this is necessarily possible. There are an uncountably infinite number of possible head-sequences. I believe you will need to pre-choose an uncountably infinite number of guess-sequences.
Yes, this is true.
You have an infinite number of prisoners, but only a countably infinite number, so at least one of them needs to come up with an uncountably infinite number of them on his own.
You are misunderstanding how the choosing is done. It’s not that particular individual prisoners choose sequences; the team as a whole chooses an agreed upon representative from each winning set.
As an example of the same sort of thing, suppose the problem was this: The warden will announce a movie, and then ask every prisoner to name an actor in that movie, with the prisoners winning if they all pick the same actor. What strategy would the prisoners use? Well, in their meeting, they would go through every movie, and pre-select some particular agreed upon actor to name if that movie comes up.
The same sort of thing is being done here, except replace “actors in movies” with “guess-sequences in winning sets”.
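Concretely, that kind of pre-agreement is just a lookup table fixed before the game. A minimal sketch with made-up movies and casts (Python for convenience):

```python
# Hypothetical movies and casts, purely to illustrate the pre-agreement.
cast_lists = {
    "Casablanca": ["Humphrey Bogart", "Ingrid Bergman"],
    "Jaws": ["Roy Scheider", "Richard Dreyfuss"],
}

# The team may fix ANY rule in advance; "alphabetically first actor" is an
# arbitrary but agreed-upon choice of representative for each movie.
agreed_answer = {movie: min(cast) for movie, cast in cast_lists.items()}

announced = "Jaws"
guesses = [agreed_answer[announced] for _prisoner in range(10)]
print(len(set(guesses)))  # 1 -- everyone names the same actor
```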
A bit of an aside, maybe, but how large is the “winning set”? As large as the reals, or larger? What about the set of possible winning sets?
Given the assumption that there are a countable number of possible labels, every particular winning set is countably infinite. (Given any particular element of a winning set, all the others can be specified by their deviation from it, this deviation being describable by a finite string like “Yeah, it’s the same except that in the third position it says ‘aardvark’ instead, and in the fifth position it says ‘balloon’ instead”.)
It follows from this that the collection of all possible winning sets has the same size as the collection of all possible label-sequences, which is equal in size to the powerset of the naturals and to the reals.
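To make the finite-deviation description concrete, here is a toy sketch of my own (Python): a member of a winning set is just the representative plus a finite patch, and there are only countably many finite patches over a countable label set.

```python
# Fix a representative sequence (a function from positions to labels)...
def representative(position):
    return "red"  # toy representative: the label is "red" at every position

# ...and describe any other member of its winning set by the finitely
# many positions where it differs.
def patched(patch):
    """The sequence agreeing with the representative everywhere except
    on the finitely many positions listed in `patch`."""
    return lambda position: patch.get(position, representative(position))

# "Same, except position 3 says 'aardvark' and position 5 says 'balloon'":
other = patched({3: "aardvark", 5: "balloon"})
print(other(3), other(5), other(100))  # aardvark balloon red
```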
[The above discussion is all within the context of classical mathematics equipped with the axiom of choice; in alternative systems of mathematics, things can be very different]
Hellestal: Looks like I was a bit too quick there. I know about expected value, but the infinite possibilities stumped me.
This calls for Qbasic. I wrote a quick one without any debug, but I think I got it right:
CLS
RANDOMIZE TIMER
FOR i = 1 TO 10000000
  Earn = 1
  DO
    a = INT(RND * 2) + 1
    IF a = 1 THEN Earn = Earn * 2
  LOOP WHILE a = 1
  IF Earn > 1 THEN TotEarn = TotEarn + Earn
  Earn = 0
  LOCATE 1, 1: PRINT TotEarn / i
NEXT
It’s a bit shaky at first, then it starts to rise steadily. After 10 mill, earnings per iteration was 14.8. (I hope I got the code right)

PBear42, the object isn’t to get at least one right, the object is to get only a finite number wrong. So no cake or pony, sorry!
Right. There’s also the problem that PBear42’s “Close your eyes and make any old guess you please” approach doesn’t even guarantee getting at least one right; for example, even if there are only two colors of hats, perhaps all the even-numbered prisoners guess black and all the odd-numbered prisoners guess white, but, actually, all the even-numbered prisoners have white hats and all the odd-numbered prisoners have black hats.

Hellestal: Looks like I was a bit too quick there. I know about expected value, but the infinite possibilities stumped me.
This calls for Qbasic. I wrote a quick one without any debug, but I think I got it right:
CLS
RANDOMIZE TIMER
FOR i = 1 TO 10000000
  Earn = 1
  DO
    a = INT(RND * 2) + 1
    IF a = 1 THEN Earn = Earn * 2
  LOOP WHILE a = 1
  IF Earn > 1 THEN TotEarn = TotEarn + Earn
  Earn = 0
  LOCATE 1, 1: PRINT TotEarn / i
NEXT
It’s a bit shaky at first, then it starts to rise steadily. After 10 mill, earnings per iteration was 14.8. (I hope I got the code right)
Your code is erroneous; it has a 1/4 probability of earning $2, a 1/8 probability of earning $4, a 1/16 probability of earning $8, etc. It would also have a 1/2 probability of earning $1, except that you decline to actually count earnings if they are not greater than $1, so instead, it has a 1/2 probability of earning $0.
That having been said, the situation your code models, while not the one I described, is still one with an infinite expected value, in the technical sense.
“But it output 14.8!”, you say. “14.8 is nowhere near infinite!” Well, sure, it output a finite number; it’s not like it was going to output a little infinity symbol. Indeed, it’s not even at all surprising that your code output such a low number as 14.8: 10 million is very close to 2^23. Of 2^23 iterations, we would expect [I am now using “expect” in a non-technical sense] about half of them (or 2^22) to earn nothing; of the remaining 2^22 iterations, we would expect about half of them (or 2^21) to earn $2; of the remaining 2^21 iterations, we would expect about half of them (or 2^20) to earn $4, and so on, all the way down to about 1 iteration expected to earn about 2^22 many dollars. The total amount in 2^23 iterations, then, we could predict to be about 2^21 * 2^1 + 2^20 * 2^2 + … + 2^0 * 2^22 = 22 * 2^22. Dividing this by the approximately 2^23 many iterations we’ve run, we shouldn’t be surprised to see earnings of about 11 dollars per round.
But, the thing is, if you ran it for twice as many iterations, you’d expect to see higher earnings per round. It would never actually converge to some value; the more iterations you run, the higher earnings per round you’d expect to calculate. In fact, running the argument in general, when you go through 2^k many iterations, you expect to find amortized earnings of about (k-1)/2 per round. Thus, every time you double the number of iterations you run, you’d expect to see the earnings per round go up by 50 cents.
The point is, this never actually converges to some particular finite value, but slowly climbs to arbitrarily large ones as the number of iterations goes up. Thus, we call it infinite.
(I should make clear, almost every instance of the word “expect” above was in the non-technical sense. In the technical sense, even for just one iteration, the “expected” earnings are infinite. The point is, for this situation, in practice, dealing with pseudorandom number generators of limited granularity and thus clamping off the tails of our probability distributions beyond a certain fixed level of detail, we should predict such an infinite technical expected value to be reflected by non-convergent behavior such as “Doubling the iterations of this program increases the output by about 50 cents”, rather than by our programming outputting astronomically high values to begin with.)
[Also, the above argument was merely a back-of-napkin-esque reasoning sketch; I’m not imputing any particularly high significance to the formula “(k-1)/2” or resulting values “11” and “50 cents” and so on. I’m just hoping to indicate the flavor of how a more rigorous analysis might go…]
(Still, for what the back-of-napkin effort is worth, another way to see the result “Doubling the number of iterations normally increases output by about 50 cents” is by realizing that doubling the number of iterations corresponds to making about one more term of the sequence 1/2 * $0 + 1/4 * $2 + 1/8 * $4 + 1/16 * $8 + … “viable in practice”, and that each of these terms (except the first) is, of course, 50 cents. Actually, this result is of much more defensible significance than the particular number 11, which somewhat arbitrarily chose the cut-off point of resolution for “viability” to be where expected number of occurrences (in the technical sense) was 1.)
[Still, don’t read too much into this argument… The main point is that seeing a low number on the computer screen is not any kind of contradiction to the fact that the technical expected value is infinite]
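For what the napkin math is worth, the truncated expectation can be computed exactly rather than simulated (a quick deterministic check of my own):

```python
# Truncated expected value of the game the program actually models:
# P($0) = 1/2, P($2) = 1/4, P($4) = 1/8, ..., cut off after k payoff terms,
# a crude stand-in for a pseudorandom generator of limited granularity.
def truncated_ev(k):
    return sum((1 / 2 ** (j + 2)) * 2 ** (j + 1) for j in range(k))

print(truncated_ev(22))                     # 11.0, the napkin figure for ~2^23 runs
print(truncated_ev(23) - truncated_ev(22))  # 0.5: one more viable term = 50 more cents
```

Every payoff term past the first contributes exactly 50 cents, so the truncated average climbs without bound as the resolution grows, which is the non-convergent behavior described above.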
By the way, if anyone has more interest in the 1/2 + 1/4 payoff game, it’s known as the St. Petersburg paradox.
Zenbeam’s point was kind of the same one I was making with my earlier less serious comment. I can perhaps be satisfied by supposing that it might be possible to simply have an algorithm that produces a correct guess-sequence for any given winning set (I think if the labels are orderable, it should be possible to only have to come up with an agreed-upon order in advance instead of all those solutions).
Another apparent paradox I see is this: Any given player looks down the line and thinks, “Hey, I don’t have to guess correctly - it’s up to all the other folks ahead of me to get it right. So I can guess anything I like!”
Or a related form: Each player has a finite number of possibly incorrect answers behind them, which can be ignored. Their only knowledge is what’s in front of them. Therefore they’re all essentially the same as Player 1. So why isn’t everyone’s guess the same as what Player 1 guesses?

[Still, don’t read too much into this argument… The main point is that seeing a low number on the computer screen is not any kind of contradiction to the fact that the technical expected value is infinite]
Sorry if I wasn’t clear enough. I did notice that it rose steadily, and supposed the trend was towards infinity. Also, the reason the program doesn’t count ones is that all earnings start out as 1 so as to have a value to double, even if the coin comes up tails (if a = 2, i.e. failure, earn is still 1, but isn’t counted). And running a bit of debug, half of the tosses give an earning of 2.
Anyway, just to save my skin. I get your point, and I must say it’s intriguing.

Zenbeam’s point was kind of the same one I was making with my earlier less serious comment. I can perhaps be satisfied by supposing that it might be possible to simply have an algorithm that produces a correct guess-sequence for any given winning set (I think if the labels are orderable, it should be possible to only have to come up with an agreed-upon order in advance instead of all those solutions).
No, it’s not sufficient for the labels to merely be orderable in themselves (an ordering of labels doesn’t in any “explicitly specifiable” way induce an ordering on sequences of labels). Actually, there is no “explicitly specifiable” algorithm which selects a particular member from each winning set.
This is why I’ve made the disclaimer that my solution relies on principles of mainstream mathematics such as the axiom of choice, which basically guarantees that arbitrary numbers of arbitrary choices can be made. That is, it says something like “If every question on a test has at least one correct answer, then there is at least one way to correctly complete the test (that is, to provide a single correct answer to each question)”. This might seem just obviously true to you, and, indeed, I suppose it is, on some conceptions of what abstractly defined functions are. But for one whose conceptions are tied to notions such as “explicitly specifiable”, such a principle is often problematic (at least, outside of cases where the number of questions is finite): for example, suppose there was one question “Name an element of X” on the test for each non-empty set of reals X. Can you explicitly describe to me a complete solution to the test? I am quite confident you cannot. But if you embrace the axiom of choice, then you must be willing to accept that such a solution nonetheless exists…
[We are on an extremely far-out tangent from the OP now. But I think that’s ok…]
Another apparent paradox I see is this: Any given player looks down the line and thinks, “Hey, I don’t have to guess correctly - it’s up to all the other folks ahead of me to get it right. So I can guess anything I like!”
Yes, that’s true in some sense. It would similarly be true of the following game: cake and ponies if all but finitely many prisoners pay the warden a dollar, destruction of the earth otherwise. Any given player can say “Hey, I don’t have to pay the warden anything; it’s up to all but finitely many other people to pay him. My unilaterally changing my decision won’t change whether we win or lose. So I can choose not to pay!”. And in a sense he’s right. But if no one pays, the prisoners lose.
Or a related form: Each player has a finite number of possibly incorrect answers behind them, which can be ignored. Their only knowledge is what’s in front of them. Therefore they’re all essentially the same as Player 1. So why isn’t everyone’s guess the same as what Player 1 guesses?
I don’t understand. Why would everyone’s guess be the same as what Player 1 guesses?
It’s true that the relevant information to be acted upon is the same for all players (the “germ” of the label-sequence; that is, equivalently, knowledge of what the winning set is). But, considering the goal, the players’ roles in acting upon that information are different. After all, the players aren’t trying to form a sequence where all but finitely many of them correctly guess what Player 1’s label is. They’re trying to form a sequence where all but finitely many of them correctly guess what their own label is.
If the goal was for prisoners to correctly announce their position in line instead of their label, would you ask why they don’t all say the same thing as Prisoner #1?
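A toy run of the strategy makes the difference in roles visible (a finite sketch of my own, with made-up data; the real argument is about infinite sequences):

```python
# The true label sequence agrees with the agreed-upon representative
# everywhere except at finitely many positions (here, 0 and 3).
def representative(i):
    return "black"

deviations = {0: "white", 3: "white"}

def actual(i):
    return deviations.get(i, representative(i))

# Prisoner i announces representative(i) -- the entry for their OWN
# position, which is why the guesses are not all the same as Prisoner 1's.
wrong = [i for i in range(1000) if representative(i) != actual(i)]
print(wrong)  # [0, 3]: only the finitely many deviant positions come out wrong
```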
You are misunderstanding how the choosing is done. It’s not that particular individual prisoners choose sequences; the team as a whole chooses an agreed upon representative from each winning set.
No, I’m not. I’m saying that if they have to come up with an uncountably infinite set of possibilities when they are choosing the representative for each winning set, they can’t somehow parse it out to get only a countably infinite number of choices that each prisoner (or the group as a whole) has to make.
Now, in the rest of your response, you seem to be saying they only have to come up with a countably infinite number of representative sequences. I don’t think that is correct. Briefly, because I don’t have time for a longer post, there are uncountably many possible headword sequences, and I think the set of sequences that differ in only a finite number of words from a particular representative sequence will only be countably infinite. Thus, you’d need uncountably many representative sequences to cover all the possible sequences. I’ll give this some more thought.

Sorry if I wasn’t clear enough. I did notice that it rose steadily, and supposed the trend was towards infinity.
Ah, ok. Good you saw it on your own, then.
Also, the reason the program doesn’t count ones is that all earnings start out as 1 so as to have a value to double, even if the coin comes up tails (if a = 2, i.e. failure, earn is still 1, but isn’t counted).
Yeah, I realized that, though it seemed like a roundabout odd way to go about it. (After all, in the situation I described, the player should NEVER earn no money at all). I would just initialize “earn” to 2 instead of 1, and get rid of the "IF earn > 1 THEN " guard. (Then your program would correctly model the situation I described)
And running a bit of debug, half of the tosses give an earning of 2.
Surely, you mean “half of the tosses where the player earns something give an earning of 2”. About half the time (when a comes out to 2 on the very first iteration of the do-while loop, before earn is doubled so much as once), the player earns nothing at all (yet you still count it as an iteration of the game).
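For what it’s worth, the program with that fix applied, translated to Python for convenience (QBasic’s RND swapped for random.random; otherwise the same structure):

```python
import random

random.seed(1)  # fixed seed, so the run is repeatable

# Earn starts at 2 and every round is counted, so the payoffs are
# P($2) = 1/2, P($4) = 1/4, P($8) = 1/8, ... as originally described.
trials = 100_000
total = 0
for _ in range(trials):
    earn = 2
    while random.random() < 0.5:  # each "heads" doubles the pot
        earn *= 2
    total += earn

average = total / trials
print(average)  # finite on any run, but climbs as trials increases
```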

Surely, you mean “half of the tosses where the player earns something give an earning of 2”. About half the time (when a comes out to 2 on the very first iteration of the do-while loop, before earn is doubled so much as once), the player earns nothing at all (yet you still count it as an iteration of the game).
Yeah, that’s what I meant, or I believe the odds in my program are the ones you gave originally, not half as good, as you supposed in reply. But shouldn’t failures be counted? Isn’t it half a chance of winning 2, 1/4 of 4, &c., i.e. half a chance of not winning anything?
ETA: Oops, now I get it. It’s a 1 in 2^infinity chance of not winning anything, right? Right.
Anyway, I think we figured this one out. Time to get back to the original derail.

No, I’m not. I’m saying that if they have to come up with an uncountably infinite set of possibilities when they are choosing the representative for each winning set, they can’t somehow parse it out to get only a countably infinite number of choices that each prisoner (or the group as a whole) has to make.
I don’t understand what you’re saying here… There isn’t only a countably infinite number of choices that the group has to make. So what?
Now, in the rest of your response, you seem to be saying they only have to come up with a countably infinite number of representative sequences.
I am not saying that. There are an uncountable number of representative sequences which the group selects. What of it?
Briefly, because I don’t have time for a longer post, there are uncountably many possible headword sequences, and I think the set of sequences that differ in only a finite number of words from a particular representative sequence will only be countably infinite. Thus, you’d need uncountably many representative sequences to cover all the possible sequences.
Yes, that is correct. But what of it?
If the game were “The warden will announce a movie, and each prisoner has to name an actor from that movie. If all the prisoners name the same actor, they win”, would you have a problem with the strategy being “For each possible movie, the team agrees on an arbitrary pre-chosen actor to give in response”, even if there were uncountably many movies in the world? (For example, if movies were sets of real numbers, with the actors within them being their elements)

Anyway, I think we figured this one out. Time to get back to the original derail.
Yes, basically, but I’ll still answer your question.
Yeah, that’s what I meant, or I believe the odds in my program are the ones you gave originally, not half as good, as you supposed in reply. But shouldn’t failures be counted? Isn’t it half a chance of winning 2, 1/4 of 4, &c., i.e. half a chance of not winning anything?
No… there’s no half a chance of not winning anything (how could there be? Combined with the half a chance of winning 2 and 1/4 chance of winning 4, your chances would add up to more than 1!). I’m not sure why you think there is. But it doesn’t matter.
ETA: Ah, I see you’ve figured out your confusion. Yes, in a sense, the chance of not winning anything is 1/2^infinity, because they are both 0. [A conversation about the non-synonymy of “impossible” and “probability 0” is probably about to erupt.]