Maybe I didn’t phrase the question clearly. Let me try again:
Suppose that each day the randomizer chooses a number from 1 to n, and if it’s 1, the light goes on; but you don’t know the value of n that has been built into the machine.
If the light went on on the first day, you’d assume, in a very non-rigorous way, that the value of n was probably pretty low. Sure, it could be the case that n=10^100 and it just happened to choose 1 on the first day, but that’s unlikely.
If the light hasn’t gone on after a million tries, then yes, it could, theoretically, be the case that n=2 and it just so happened that “the coin landed tails” one million times in a row, but that’s unlikely. It’s more likely that you’re dealing with a large value of n.
If you know the value of n, things are different. If you flip a coin, then you know n=2. Even if it lands tails a million times, the probability of the next toss being heads is 50%.
But if you don’t know the value of n, and after a million tries the light never goes on, is any sane person going to bet even money on the next trial?
The question then is, if the only information we have is the fact that after x trials, the light never went on, then what can we determine about the probability that it will go on on the next trial? It seems to me that as more trials pass, it gets more and more likely that the value of n is large, and so you’re less and less able to expect a success on the next trial. But how does one rigorously quantify that commonsense expectation? What, precisely, is the probability of a success after x failures?
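One way to make the question concrete is a Bayesian calculation, though it requires assumptions the post deliberately leaves open: a prior over n (uniform here) and a cap N_MAX on how large n can be (there is no such cap in the original setup; it is purely for illustration). After x failures, the posterior over each candidate n is proportional to (1 - 1/n)^x, and the predictive probability of a success on the next trial is the posterior-weighted average of 1/n:

```python
# Sketch of a Bayesian answer, assuming a uniform prior on n over
# 1..N_MAX. Both the prior and the cap N_MAX are assumptions not
# present in the original question.

N_MAX = 1000  # hypothetical cap on n, for illustration only


def p_success_next(x, n_max=N_MAX):
    """P(light goes on at trial x+1 | it failed on trials 1..x)."""
    # Likelihood of x consecutive failures for each candidate n.
    likelihoods = [(1 - 1 / n) ** x for n in range(1, n_max + 1)]
    total = sum(likelihoods)
    # Posterior over n, then the posterior-weighted chance 1/n of a hit.
    return sum((lk / total) / n
               for n, lk in enumerate(likelihoods, start=1))


for x in (0, 10, 100, 1000):
    print(f"after {x} failures: P(success next) = {p_success_next(x):.5f}")
```

This reproduces the commonsense expectation numerically: the predictive probability strictly decreases as failures accumulate, because the posterior mass shifts toward large n. The answer does depend on the choice of prior, which is exactly why the question has no single rigorous value without one.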
-Ben