Balls and gremlins puzzle

Can anyone explain to me why both of the following couldn’t be true at the end of the OP’s procedure?:

  1. There are no balls in the bucket labeled with finite numbers.

  2. There are an infinite number of balls in the bucket.

What assumption or axiom does this violate? Furthermore, clearly, this is the result of the procedure.

I’d rather not read through a four page thread. Can’t you just explain in your own words how it makes sense?

The whole set cannot be presented in finite time, but every stage of it can be. No element of the final urn set, however, can be determined at any finite time.

Consider the following restatements of the problem:

A: On the first step, the gremlin writes the following:

1
0

then writes

2 1
1 0

then

4 3 2 1
1 1 0 0

then

6 5 4 3 2 1
1 1 1 0 0 0

etc

B: On the first step, the gremlin writes
0

then

10

then

1100

then

111000

etc.

What will the last number he writes down be?
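Restatement B's strings can be generated mechanically. This is my own sketch, assuming the standard dynamics (after step n the urn holds balls n+1 through 2n, with a 1 bit for each ball still in the urn); the post's step-1 convention is ambiguous, so this indexing is shifted one step from the strings above, but the pattern is the same.

```python
# Sketch of restatement B, assuming the urn holds balls n+1 .. 2n after
# step n (an assumption; the post's step-1 line is ambiguous, so this
# indexing is shifted by one from the strings shown above).
def urn(n):
    """Set of ball numbers in the urn after step n."""
    return set(range(n + 1, 2 * n + 1))

def bits(n):
    """Bit string over balls 2n .. 1, highest first: 1 = in urn, 0 = removed."""
    return "".join("1" if k in urn(n) else "0" for k in range(2 * n, 0, -1))

for n in range(1, 5):
    print(bits(n))   # 10, 1100, 111000, 11110000

# Each fixed position eventually reads 0 forever, yet the number of 1s
# grows without bound -- so there is no sensible "last number":
assert all(1 not in urn(n) for n in range(2, 60))
```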

ZenBeam:
How about I take your idea even further. What if there’s only one ball? After each step, the gremlin relabels the ball one higher. After an infinite number of steps, clearly there’s going to be a ball in the urn, because the number of balls never changes. But what number is on the ball?

It violates the condition that all the balls are numbered. Each of the balls is supposed to have a number on it - none of the balls has a lazy-eight infinity symbol written on it, so if there are no finite-numbered balls in the bucket, that condition is violated. So (I maintain) there cannot be an infinite number of steps because it would violate that condition.

First, I owe an apology to ZenBeam. This comment:

was out of line. My apologies.

Now, let’s get down to the meat of this. I assume that everyone reading is familiar with basic set theory. If not, I suggest you familiarize yourself with basic set theory before continuing.

Ready?

The problem as I see it is that we have a sequence of sets of natural numbers (B[sub]0[/sub] = ∅ and B[sub]n[/sub] = {k | n + 1 ≤ k ≤ 2n} for n ≥ 1), and we want to know how they behave as n goes to infinity (i.e., as n gets larger and larger).

When we want to examine the behavior of a sequence of numbers as the index increases without bound, we use limits. And that’s exactly what I propose we do here. Now, there is no generally accepted set-theoretic definition of a limit that I know of, so I had to invent one. After three or four tries, I found a good one. Let’s examine it.

Let <A[sub]n[/sub]> be a sequence of sets drawn from a universe U. We’ll say that k ∈ U is eventually present in <A[sub]n[/sub]> iff there exists a natural number N[sub]k[/sub] such that k ∈ A[sub]n[/sub] for all n > N[sub]k[/sub]. Similarly, we’ll say that h ∈ U is eventually absent from <A[sub]n[/sub]> iff there exists a natural number N[sub]h[/sub] such that h ∉ A[sub]n[/sub] for all n > N[sub]h[/sub].

To save me some typing, let’s use a ∧ <A[sub]n[/sub]> to mean “a is eventually present in <A[sub]n[/sub]>” and a ∨ <A[sub]n[/sub]> to mean “a is eventually absent from <A[sub]n[/sub]>”.

With those definitions in mind, let’s introduce the meat of our discussion. For A ⊆ U, we say that limit(A[sub]n[/sub]) = A iff two conditions hold:[ol][li]For every a ∈ A, a ∧ <A[sub]n[/sub]>.[/li][li]For every b ∉ A, b ∨ <A[sub]n[/sub]>.[/li][/ol]I think this corresponds pretty closely to how you’d intuitively expect a convergent sequence of sets to behave: it has to “get closer” to its limit in some sense. This is that sense.
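The two definitions can be illustrated in code. This is a sketch of my own, not from the thread: we can only ever inspect finitely many terms of a sequence, so the check runs over a finite window past the claimed witness N; for the families discussed here the behavior is already settled by then.

```python
# Finite-horizon illustration of "eventually present" / "eventually
# absent" for a sequence given as a function n -> set.  A real check
# would need all n > N; here we inspect n in (N, N + horizon] only.
def eventually_present(k, seq, N, horizon=100):
    return all(k in seq(n) for n in range(N + 1, N + horizon + 1))

def eventually_absent(k, seq, N, horizon=100):
    return all(k not in seq(n) for n in range(N + 1, N + horizon + 1))

# Example: A_n = {1, ..., n}.  Element 3 is present for all n > 3.
A = lambda n: set(range(1, n + 1))
print(eventually_present(3, A, N=3))   # True
print(eventually_absent(3, A, N=3))    # False
```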
Now, we immediately need to prove one thing: if limit(A[sub]n[/sub]) = A and limit(A[sub]n[/sub]) = B, then A = B.

That’s pretty straightforward. If A ≠ B, there are three possibilities: A ⊂ B, B ⊂ A, or A - B and B - A are both non-empty.

If A ⊂ B, then B - A is non-empty. Pick an element b from there. Since b ∈ B, b ∧ <A[sub]n[/sub]>. Since b ∉ A, b ∨ <A[sub]n[/sub]>. That’s pretty clearly a contradiction, I think.

The case B ⊂ A is almost exactly the same. A - B is non-empty, so just pick an element a out of there, and apply the same analysis.

If B - A and A - B are both non-empty, then B - A is non-empty, and we can use the preceding analysis again.

In any case, if limit(A[sub]n[/sub]) = A and limit(A[sub]n[/sub]) = B, then A = B. We refer to an operator like this as well-defined: if you give it the same input in different forms, it produces the same output.

With that under our belts, let’s go ahead and consider the sequence in the OP. Remember, it’s B[sub]0[/sub] = ∅ and B[sub]n[/sub] = {k | n + 1 ≤ k ≤ 2n} for n ≥ 1. We will show that limit(B[sub]n[/sub]) = ∅.

For any k ∈ ∅, k ∧ <B[sub]n[/sub]>. This is vacuously true: there is no k ∈ ∅, so you can’t find a counterexample.

For any h ∉ ∅, h ∉ B[sub]n[/sub] when n > h. Therefore, h ∨ <B[sub]n[/sub]>.

By the definition of limit(), limit(B[sub]n[/sub]) = ∅. It’s ridiculously counterintuitive, but there you have it.
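The argument can be spot-checked at finite stages. A sketch: only finitely many n can ever be inspected, but the witness from the proof (h is gone once n ≥ h) tells us exactly where to look.

```python
# B_0 = {} and B_n = {k : n+1 <= k <= 2n}, at finite stages only.
def B(n):
    return set(range(n + 1, 2 * n + 1))

# Every fixed h is absent from B_n for all n >= h, even though the
# number of balls |B_n| = n grows without bound:
for h in (1, 7, 42):
    assert all(h not in B(n) for n in range(h, h + 200))

print([len(B(n)) for n in (10, 100, 1000)])   # [10, 100, 1000]
```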

This is getting to be a long post, so I’ll go ahead and submit. Then I’ll argue why I think this is a good general notion of convergence for sequences of sets.

All right, comfortable with my last post? Good, let’s show some interesting things about my limit operator.

Let’s assume that limit(A[sub]n[/sub]) = A, and limit(B[sub]n[/sub]) = B. Then we can show the following:
[ol]
[li]limit(C) = C, where <A[sub]n[/sub]> is the constant sequence with every term C[/li]
[li]limit(A[sub]n[/sub][sup]c[/sup]) = A[sup]c[/sup][/li]
[li]limit(A[sub]n[/sub] ∩ B[sub]n[/sub]) = A ∩ B[/li]
[li]limit(A[sub]n[/sub] ∪ B[sub]n[/sub]) = A ∪ B[/li]
[li]limit(A[sub]n[/sub] - B[sub]n[/sub]) = A - B[/li]
[li]limit(A[sub]n[/sub] Δ B[sub]n[/sub]) = A Δ B[/li]
[li]limit(A[sub]n[/sub] × B[sub]n[/sub]) = A × B[/li]
[li]limit(P(A[sub]n[/sub])) = P(A)[/li]
[/ol]Just in case you’re not familiar with a couple of those operators, Δ is the symmetric difference operator, × is the cartesian product operator, and P is the powerset operator.
I have chosen to sketch the proofs rather than go through in full detail because it’s late and I’m tired. I can give more detail, but I think they’re straightforward enough that you can verify them as they are.

  1. limit(C) = C, for the constant sequence. This one’s pretty easy. Every element of C is in every term of the sequence, so it’s eventually present. Every element outside of C is not in any term of the sequence, so it’s eventually absent.

  2. limit(A[sub]n[/sub][sup]c[/sup]) = A[sup]c[/sup]. Well, if a ∈ A[sup]c[/sup], then a ∉ A, so a is eventually present in <A[sub]n[/sub][sup]c[/sup]>. Likewise, if a ∉ A[sup]c[/sup], then a ∈ A, so a is eventually absent from <A[sub]n[/sub][sup]c[/sup]>.

  3. limit(A[sub]n[/sub] ∩ B[sub]n[/sub]) = A ∩ B. Assume a ∈ A ∩ B. Then a ∈ A, and a ∈ B. This implies that a is eventually present in both <A[sub]n[/sub]> and <B[sub]n[/sub]>, which implies that it is eventually present in <A[sub]n[/sub] ∩ B[sub]n[/sub]>. If z ∉ A ∩ B, then either z ∉ A or z ∉ B. In either case, z is eventually absent from <A[sub]n[/sub] ∩ B[sub]n[/sub]>.

  4. limit(A[sub]n[/sub] ∪ B[sub]n[/sub]) = A ∪ B. This is easy. A ∪ B = (A[sup]c[/sup] ∩ B[sup]c[/sup])[sup]c[/sup]. The conclusion follows immediately from 2 and 3 above.

  5. limit(A[sub]n[/sub] - B[sub]n[/sub]) = A - B. A - B = A ∩ B[sup]c[/sup], so by 2 and 3 above, we’re done.

  6. limit(A[sub]n[/sub] Δ B[sub]n[/sub]) = A Δ B. This is another easy one. A Δ B = (A - B) ∪ (B - A). We get the conclusion from 4 and 5 above.

  7. limit(A[sub]n[/sub] × B[sub]n[/sub]) = A × B. Now we’re done with the easy ones. If (a, b) ∈ A × B, then a ∈ A and b ∈ B. So a is eventually present in <A[sub]n[/sub]>, and b is eventually present in <B[sub]n[/sub]>. This implies that (a, b) is eventually present in <A[sub]n[/sub] × B[sub]n[/sub]>. If (a, b) ∉ A × B, then a ∉ A or b ∉ B. In either case, (a, b) is eventually absent from <A[sub]n[/sub] × B[sub]n[/sub]>.

  8. limit(P(A[sub]n[/sub])) = P(A). This one is kinda tricky. Recall that S ∈ P(A) iff S ⊆ A. So if S ⊆ A, then for every s ∈ S, s ∈ A. That means that s is eventually present in <A[sub]n[/sub]>. Since every member of S is eventually present in <A[sub]n[/sub]>, S is eventually present in <P(A[sub]n[/sub])> (at least when S is finite; for infinite S the witnesses N[sub]s[/sub] would need a common bound, so that step takes more care). If S ⊄ A, then there is some s ∈ S - A. That means that s is eventually absent from <A[sub]n[/sub]>, which implies that S is eventually absent from <P(A[sub]n[/sub])>.
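A couple of the laws above can be spot-checked on toy sequences. These are my own examples, not from the thread: A[sub]n[/sub] converges to {1, 2} and B[sub]n[/sub] to {2, 3}, with a drifting element in each that is eventually absent from everything fixed.

```python
# Spot check of the intersection and union laws on toy sequences.
# A_n -> {1, 2} and B_n -> {2, 3}; the drifting elements n + 10 and
# n + 20 pass through and never settle anywhere.
A = lambda n: {1, 2, n + 10}
B = lambda n: {2, 3, n + 20}
inter = lambda n: A(n) & B(n)
uni = lambda n: A(n) | B(n)

# limit(A_n ∩ B_n) should be {1, 2} ∩ {2, 3} = {2}:
assert all(2 in inter(n) for n in range(1, 500))
assert all(5 not in inter(n) for n in range(1, 500))
# limit(A_n ∪ B_n) should be {1, 2} ∪ {2, 3} = {1, 2, 3}:
assert all({1, 2, 3} <= uni(n) for n in range(1, 500))
# 50 appears in the union only at n = 30 and n = 40, then never again,
# so it is eventually absent:
assert all(50 not in uni(n) for n in range(41, 500))
print("laws check out on this example")
```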

These properties make me think that my limit operator is a good choice in general. I would like to find a proof that limit(A[sub]n[/sub] → B[sub]n[/sub]) = A → B (recall that A → B is the set of all functions from A to B), but that’s hard. Maybe I’ll get there eventually.

Brain hurt. Hulk sad.

Math fights are one of my favorite quirks of the Dope. :slight_smile:

I just want to pop in and point out that there already is a definition of the “limit” of a sequence of sets. The “limsup” or “upper limit” of a sequence A[sub]n[/sub] is the set consisting of all points which lie in infinitely many of the A[sub]n[/sub]'s. The “liminf” or “lower limit” of the sequence A[sub]n[/sub] is the set consisting of all points which lie in all but finitely many of the A[sub]n[/sub]'s.

For example, if A[sub]n[/sub] is the set {+1} when n is even and {-1,+1} when n is odd, then the upper limit is the set {-1,+1}, while the lower limit is the set {+1}.

If the upper and lower limits of a sequence of sets are the same, then that sequence is said to converge to the common limit. (The upper limit always contains the lower limit, but they may not be equal.)

This definition is, I think, equivalent to the definition that ultrafilter has posted in the cases where the limit exists. Specifically, an element is “eventually present” if it lies in the upper limit, and “eventually absent” if it doesn’t lie in the lower limit.

Note that with this definition, the limit of the sequence of sets {}, {2}, {3,4}, {4,5,6}, {5,6,7,8}, … (i.e. the sequence of sets which describes which balls are in the bucket after each step) exists and is in fact the empty set. If we change the way the gremlin removes the balls…for example, if we require the gremlin to remove the highest numbered ball instead of the lowest…then the sequence of sets in question becomes {}, {1}, {1,3}, {1,3,5}, and so forth. This sequence also has a limit, but this time the limit is the set of all odd numbers.
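Both variants can be approximated in code. This is a sketch of my own: a true liminf/limsup quantifies over all n, which we can't compute, so over a small universe we inspect a late finite window — an element goes in the approximate limsup if it appears in some term of the window, and in the approximate liminf if it appears in every term.

```python
# Finite-window approximation of liminf/limsup over a small universe.
def approx_limits(seq, universe, lo=50, hi=150):
    window = [seq(n) for n in range(lo, hi)]
    limsup = {x for x in universe if any(x in s for s in window)}
    liminf = {x for x in universe if all(x in s for s in window)}
    return liminf, limsup

U = set(range(1, 20))
remove_lowest  = lambda n: set(range(n + 1, 2 * n + 1))  # {n+1, ..., 2n}
remove_highest = lambda n: set(range(1, 2 * n, 2))       # {1, 3, ..., 2n-1}

print(approx_limits(remove_lowest, U))   # both empty on this universe
print(approx_limits(remove_highest, U))  # both equal the odd numbers up to 19
```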

Regarding ZenBeam’s objection: I would contend that the act of relabelling the balls when they’re outside urn A does in fact change the problem, even though nothing has changed “from the perspective of the urn”. Let’s suppose just for kicks that the balls have a second number written on them in invisible ink. Initially, these new numbers are sequential, in order, and in place before the experiment begins, but the new numbers don’t get changed when the balls are placed in urn B.

Then the sequence of sets in urn A, according to the second, fixed, labelling, is {}, {2}, {1,3}, {3,2,4}, {2,4,1,5}, {4,1,5,3,6}, {1,5,3,6,2,7}, and so forth. In other words, according to the numbers written in invisible ink the balls in urn A at the end of minute n are the numbers from 1 to n except for f(n), where f is a function which assumes every positive value an infinite number of times. This sequence of sets has no limit by the definition above: the upper limit is the entire set of positive integers, the lower limit is empty. Every ball passes from urn A to urn B and back again an infinite number of times.
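The divergence can be seen at finite stages. The thread only says the urn holds 1 to n except f(n), where f takes every positive value infinitely often; the concrete f below (the enumeration 1; 1, 2; 1, 2, 3; …) is my own assumption for illustration, not the f induced by the relabelling procedure.

```python
# One concrete f hitting every positive value infinitely often:
# f runs through 1; 1, 2; 1, 2, 3; 1, 2, 3, 4; ... as n = 1, 2, 3, ...
def f(n):
    m, block = n, 1
    while m > block:
        m -= block
        block += 1
    return m

# Urn A at the end of minute n, by the invisible-ink labels:
A = lambda n: set(range(1, n + 1)) - {f(n)}

# Ball 1 keeps leaving and returning: it lies in infinitely many A_n
# (so it's in the upper limit) but not in all but finitely many (so
# it's not in the lower limit).  The sequence has no limit.
ins  = [n for n in range(1, 100) if 1 in A(n)]
outs = [n for n in range(1, 100) if 1 not in A(n)]
print(outs[:6])   # steps where ball 1 is out: [1, 2, 4, 7, 11, 16]
```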

Why the difference, when everything looks the same from inside urn A? (The new numbers are, after all, invisible.) This is why: as soon as the situation is changed to allow the relabelling of the balls, an observer “inside” urn A no longer has enough information to determine which ball is being removed at each step, because such an observer cannot see what’s being relabelled! And the exact choice of which ball is being removed at each step definitely affects the outcome, just as in the case where the gremlin removes the highest numbered ball instead of the lowest.

In short ZenBeam, I’m afraid your scenario doesn’t describe the same situation as that originally proposed.

What I want to know is (and I’m serious about this): why is this even debatable?

I don’t know the answer. I have a hunch, and since I’m typing anyway, I’ll include it below, but in fairness, I have no solid math background. But this strikes me as something that there clearly is an answer to, and it’s likely indisputable. And not only that, I suspect that the field of study this relates to in mathematics isn’t even new or controversial or anything. This isn’t string theory; it’s something that’s been discussed for centuries.

So… Is everyone here just doing like me and posting their hunch without any real experience? Why can’t some math major say “oh yeah, we covered that back in junior year; the answer is X”?

And now my hunch, which carries the extensive weight of nothing:

At minute n, the remaining balls are numbered from n+1 to 2n. And the total balls in the urn are 2n - (n+1) + 1 = n. So, as n approaches infinity, the number of balls in the urn (n) approaches infinity, and so does the lowest numbered ball (n+1). The fact that there are no finitely numbered balls in the urn doesn’t mean there are no balls in the urn.
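The two quantities in the hunch can be tabulated directly at finite stages:

```python
# After minute n the urn holds balls n+1 .. 2n, so both the count and
# the lowest label grow without bound as n does.
for n in (10, 100, 1000):
    balls = range(n + 1, 2 * n + 1)
    print(n, len(balls), min(balls))   # count = n, lowest label = n + 1
```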

2*infinity = infinity. The group pulled out is infinite and the group remaining is infinite.

I am (or was) a math major, and we did cover the definition of the “liminf” and “limsup” of a sequence of sets in, I believe, fourth year.


All the balls are finitely numbered. If there are no finitely numbered balls in the urn, what other balls could possibly be in the urn?

Are you reading the same thread I’m reading?

ultrafilter, well, I’m looking at thread id #184084. Maybe I missed your point.

Orbifold, my hunch is quickly changing. You’re saying that watching the “infinity” count on the balls isn’t the real thing to be counting (if counting is the right word). Instead it’s the “infinity” count of the time taken by both the adder of balls and the gremlin. The gremlin takes twice as long to remove them, but as n approaches infinity, so does 2n. I.e. for every ball that goes in, one comes out, it’s just that they come out half as fast. Which would be a problem if we were talking finite times, but as both times have the luxury of approaching infinity, they both are the same.

Well, after writing that whole rambling post I realized that I got this part backwards. An element is “eventually present” if it lies in the lower limit, and “eventually absent” if it doesn’t lie in the upper limit. My apologies to anyone still reading…

Let me slightly rephrase that for precision:

For every ball that goes in, that same ball comes out, it’s just that they come out half as fast. Which would be a problem if we were talking finite times, but as both times have the luxury of approaching infinity, they both are the same, and therefore every ball comes back out.

True?

I would just shoot the gremlin.

Sorry, I was just a little irritated when you insinuated that everyone is just posting hunches. It took me about two hours to type my longer posts in this thread, and probably about twice that much time to come up with the ideas and details in the first place. Not exactly a hunch, wouldn’t you say?

ultrafilter wrote

You certainly put a lot of time and effort into your posts, yes. I wasn’t intending to be insulting.

I didn’t think you meant anything by it. No hard feelings?

Methinks that there will be infinitely many balls in the bucket.

The argument that the balls in the bucket will not have any meaningful labels on them doesn’t hold IMHO, especially because the problem states that you already have an infinite supply of balls that are numbered. So there are already labels on them that approach infinity. This means that if you argue that there won’t be any balls in the bucket in the end because they would have to be labeled, you don’t even need to start, because there are already infinitely many labeled balls which in your opinion can’t exist.

IMHO the solution is as already pointed out:

balls_in_bucket = lim(t->inf) (2*t - t) = inf

missing_link, the difficulty is, what is written on the remaining balls? Or even one of them?