Infinity Question

Does anyone want to respond to the argument I made in this post for Case IIIC,D? That was a situation like in the OP, except, whenever god was supposed to put in 2 chips (say, the 7 & 8) and take out 1 (the 4), he puts the 8 in, takes out the 4, relabels the 4 to be a 7, and asks you whether to put this relabeled 7 or a fresh 7 into the dish. Though the 2 chips look identical, Tyrrell McAllister suggested that, if you always choose the chip that had been in the dish, you will end up with infinitely many chips, while if you always choose the new chip, the dish will end up empty. Earlier, he brought up the idea of a chip being “tagged” for the whole process, so that identical-looking chips aren’t really identical if they have different histories. My argument against this historical tagging went as follows:

Consider any arbitrary move n. Whether you choose the chip that was in the dish or the new chip, there will be n chips in the dish after this move. As long as there are a finite number of chips in the dish, you do not need to worry about which choice you make, since you can make the number of chips in the dish end up infinite by always picking those chips that are in the dish after this move. So, whether the old chip goes back into the dish or the new chip goes in on this move is irrelevant. Since the move n is arbitrary, the choice of chip is irrelevant on any move. So, it makes no difference if you choose the old chip on every move or if you choose the new chip on every move. Since the number of chips at the end is infinite if you always choose the old chip, and whether you choose the old chip or new chip is irrelevant, the number of chips at the end is infinite if you always choose the new chip. And that is precisely the situation raised in the OP.
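
For what it's worth, here's a rough Python sketch of the finite part of this argument (my own toy code, not anything anyone above wrote; the strategy names are made up). It tags each chip so that chips with the same label can still have different histories, and checks that after any finite number of moves the "always keep the old chip" and "always take the fresh chip" strategies leave the same count and the same labels in the dish. It assumes the first move simply plays out as in the original problem, since there is no old chip to offer yet.

    # Finite simulation of the Case IIIC/D choice.  At move n, God adds a chip
    # labeled 2n and removes the chip labeled n; the removed chip is relabeled
    # 2n-1, and you choose between it ("old") and a fresh chip 2n-1 ("new").
    def simulate(moves, strategy):
        dish = []                      # list of (tag, label) pairs
        next_tag = 0
        for n in range(1, moves + 1):
            dish.append((next_tag, 2 * n))                 # the brand-new chip 2n
            next_tag += 1
            removed = next((c for c in dish if c[1] == n), None)
            if removed is not None:
                dish.remove(removed)                       # the chip labeled n comes out
                if strategy == "old":
                    dish.append((removed[0], 2 * n - 1))   # same chip, relabeled 2n-1
                else:
                    dish.append((next_tag, 2 * n - 1))     # a fresh chip labeled 2n-1
                    next_tag += 1
        return dish

    for moves in (5, 50, 500):
        old = simulate(moves, "old")
        new = simulate(moves, "new")
        assert len(old) == len(new) == moves
        assert (sorted(label for _, label in old)
                == sorted(label for _, label in new)
                == list(range(moves + 1, 2 * moves + 1)))
    print("after any finite number of moves, both strategies leave the same count and labels")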

I’m not sure if this set is well-defined, but if it is, it looks like it should be empty.

The point of this, I assume, is that the set we’re interested in is always a subset of this set. That is true after any finite number of moves, but it doesn’t necessarily hold at the end. I could always make a discontinuity claim like you’ve been making against ZenBeam. The finite sets in case IIIC (where each move consists of adding a chip & relabeling a chip) are also subsets of the sets in this sequence, but it seems that everyone agrees that in IIIC we end up with a dish with infinitely many chips in it.

To erislover - we can’t just say that “god removes all numbers”, because that is what’s being argued. After the nth move, God has all the chips up to 2n, except he’s removed the bottom half (in the other case, he’d removed all the even numbers). Once the number of chips we’re dealing with becomes infinite, it doesn’t make sense to say that he’s removed the bottom half, so it’s hard to know what’s there & how many are there. (In the other case, it still made sense to say that he’d removed all the even numbers.)

OK, I have a few variations, and I want to see how this works for them.

Case One:

God creates two chips, identical in every aspect, and puts them in a very special bowl. The bowl instantaneously destroys one of the chips that enter it, every time. These events take place in zero time, and God schedules these events at the sequentially reduced intervals that will accomplish an infinite number of creation events in two hours.

How many chips?

Case Two:

God creates numbered chips two at a time, with sequential integers beginning with 1 and 2. God never repeats a number. The bowl always destroys the lower numbered chip that enters at each step. The devil is hiding in the bowl. Each time a new chip enters the bowl, he changes all the other chips by making each one show the next integer after the one currently on it. The devil can never destroy chips; he only changes numbers. Again, all these events take place in zero time, and the events are scheduled as before.

First God throws in 1 and 2. There are no chips in the bowl, so the devil has no chips to change. The bowl destroys number one, and number two is in the bowl. God throws in number 3 and 4. The bowl destroys chip number 3, and the devil turns the number 2 into a 3.
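
If it helps, here's a quick sketch of this (mine, and it bakes in one reading of the timing: the devil bumps every label already in the bowl by one, once per step, as in the walkthrough above). At every finite step the labels in the bowl come out as n+1 through 2n, the same state as the original problem.

    bowl = []
    for n in range(1, 11):
        bowl = [label + 1 for label in bowl]   # the devil relabels the survivors
        bowl.append(2 * n)                     # 2n-1 enters and is destroyed; 2n stays
        assert sorted(bowl) == list(range(n + 1, 2 * n + 1))
    print(sorted(bowl))   # [11, 12, ..., 20] -- the same labels as the original problem after 10 steps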

Now how many chips are there after two hours?

Are the chips still numbered sequentially? If they are not, why not? And if they are, isn’t one of them the lowest value integer in the bowl, despite the fact that all the chips in the bowl are numbered with integers which are infinite in value?
How is that different from the original problem?

Tris

knock knock

Why wouldn’t it be well defined? All that’s really happening here is the principle of induction on the natural numbers:

At the end, clearly 0 is not in the dish, since it’s removed in the first step. If n is not in the dish, it was removed at some step. After this step was taken, n+1 was the smallest chip in the dish, hence it was removed in the following step, implying n+1 is also not in the dish. By induction, no natural numbers are in the dish, and since only natural numbers are in the dish, the dish must be empty.
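
Here's a finite illustration of the inductive step (my own sketch, assuming the process is the one where God adds two chips and removes the smallest at each step; a finite run can only show the pattern, not the claim "at the end"). Chip n always turns out to be removed at step n+1, which is exactly what the induction uses.

    import heapq

    dish = []
    removed_at = {}
    STEPS = 1000
    for k in range(1, STEPS + 1):
        heapq.heappush(dish, 2 * k - 2)        # God adds chips 2k-2 and 2k-1...
        heapq.heappush(dish, 2 * k - 1)
        removed_at[heapq.heappop(dish)] = k    # ...and the smallest chip is removed
    assert all(removed_at[n] == n + 1 for n in removed_at)
    print("chip 0 went at step 1, chip 1 at step 2, ..., chip n at step n+1")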

Are you saying something is wrong with induction?

Well, I just proved they’re both empty, so we won’t have to worry about that any more.

Finally, about your variations. Admittedly, I haven’t had time to look at each of them in detail (honestly, some not at all), but what seems to be going on in the ones I did see was a confusion over “chips” versus “natural numbers”.

In the original problem, there was a bijection between chips and numbers–every chip had exactly one number written on it, and every number was written on exactly one chip.

This is not true for your examples (or yours, Triskadecamus). Chips are getting their numbers erased, new numbers are being put on them, and so forth. We no longer have a direct correspondence between chips and numbers. We can talk about chips, and we can talk about numbers, but what's true of one may not necessarily be true of the other in these examples.

With this in mind, is there a particular example of yours I should consider? (I still have a lot of work to do at the moment; I only come here when I take a break and don't have time to review all your examples, so if you could point me to something specific, I'd appreciate it.)

Cabbage - I’ll assume that the set in your example is empty. When I said “The point of this, I assume, is that the set we’re interested in is always a subset of this set. That is true after any finite number of moves, but it doesn’t necessarily hold at the end.” “the set we’re interested in” refers to the set in the OP. Is that what you’re referring to when you say you proved that they’re both empty? I agree that the set of chips from n+1 to 2n is a subset of the set from n+1 to infinity, and I think the set from n+1 to infinity goes to the null set as n goes to infinity, but that does not necessarily mean that the set from n+1 to 2n goes to the null set. There could be a discontinuity there.

The most important examples, I think, are IIIC and IIID (Tris's example looks a lot like IIIC). You can look at my last post for an explanation of this - in one case, there is renumbering, and in one case, there is not. I tried to argue that the case with renumbering (IIIC) ends up with infinitely many chips, and that the case without renumbering (i.e. with only replacement of chips - IIID - which is identical to the OP) must end up the same.

But the exact same set of chips are either in the dish (my example) or will be put in the dish (original example) in each case. To say that some are left in the original example but not mine is to say that some chips are removed in my example which were not removed in the original example, which is clearly not the case.

Let me go back to considering my example and the original example first, then I’ll get to yours.

As I mentioned earlier, one of the things that makes the original example special is that we have a direct correspondence between chips and numbers, so we can speak of them interchangeably. This is also true of my example. And in each of these examples, we can reason that, since all natural numbers have been taken out, all chips must have been taken out.

Let’s try a variation of my example (starting with all natural numbers in the dish and removing the smallest at each step). I’ll adopt your idea of the number being written on the chip in the form of whatever number of dots.

Now suppose that at each step, instead of removing the smallest chip, God simply adds a single dot to each chip. Let’s consider what happened–clearly no chip has been removed, yet clearly a number has been removed (the smallest number represented on a chip is no longer represented by any chip).

We’ve now lost the chip<->number correspondence, and must consider each individually–what happens to the chips may not happen to the numbers, and vice versa.

For the numbers: All the natural numbers will be removed from the dish, since, at some point, each natural number will be the smallest one left, hence it will be removed.

For the chips: No chips are ever leaving the dish. At the end, we’ll still have an infinite number of chips in the dish.

And each of the infinitely many chips will have infinitely many dots on it. No integers are left in the dish, yet the chips remain.
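
Here's a small sketch of that contrast (mine, truncated to finitely many chips and steps, so it only shows the finite stages): start with chips showing 1 through M dots, and either remove the least-dotted chip each step or add a dot to every chip each step. The smallest number showing is the same in both processes, but only the first process loses chips.

    M, STEPS = 1000, 100
    a = list(range(1, M + 1))          # dot counts; here chip identity = dot count
    b = list(range(1, M + 1))
    for _ in range(STEPS):
        a.pop(0)                       # process A: the smallest-dotted chip leaves the dish
        b = [dots + 1 for dots in b]   # process B: no chip leaves, every chip gains a dot
    print(min(a), min(b))              # 101 101 -- the numbers 1..100 are gone either way
    print(len(a), len(b))              # 900 1000 -- but only process A actually lost chips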

Now, similar to your example, suppose that as each step is taken, you don’t know what God is doing. The dish goes from containing {3,4,5,…} to containing {4,5,6,…}, but you don’t know if chip 3 was removed, or if one dot was added to every chip.

In either case, certainly all the integers will be removed. However, it’s impossible to determine what will happen to the chips at the end, since you don’t have knowledge of whether we have a one-to-one correspondence between chips and numbers.

I think it should be clear now what I would argue in your examples IIIC and IIID. If God is replacing the chips (and not relabeling them) we again have the chip<->number correspondence. All natural numbers have been taken out, therefore all chips have been taken out.

On the other hand, if God is relabeling the chips (adding dots on them), all the natural numbers are still being taken out, but we end up with infinitely many chips, each with infinitely many dots on them.

And if we don’t know which God is doing, we don’t have enough information to conclude what will happen in the end to the chips (though in either case we know all natural numbers are removed, so we still know that).

You are focusing here on example 2 only. By itself, example 2 is a weaker argument than when it is combined with example 2B.

I assume the fly moves continuously for t<2. In neither example 2 nor 2B have I needed to state whether the fly’s motion is continuous for t<=2. Before I had example 2B, the answer that the fly is at 0 at t=2 was merely (one could argue) counterintuitive. With the addition of example 2B, I have a contradiction.

While I agree with this statement, I’m following Tyrrell McAllister’s proof without rearranging the series. My argument is not simply that rearranging the series can get the series to converge at different points. My argument is that the same scenario, merely by virtue of being described differently, yields two different answers. The fly ends up at two different locations at t=2.

This occurred to me reading your last post. Maybe it’s been answered previously, but it’s late and I’m tired:

What if God always writes an integer on each chip, in addition to the dots, and equal to the number of dots (writing the new number, then erasing the old if he adds a dot)? What numbers are written on those chips with infinitely many dots?

ZenBeam, really, the very simple answer is that Tyrrell McAllister’s proof applies to sets, not the sum of an infinite series. I’ve skimmed over your arguments, but it’s easy to get bogged down and lost in your notation; I would try harder were it not for the simple fact that, whatever you may conclude with the series you’re dealing with, it still has nothing to do with the original sets involved. How many times must I say it? Sets and series are two different things.

Anyway, as I said, I’m having trouble following your point in 2b. If you don’t mind, I’d prefer we start with your example 2, then work our way up to 2b if we have to.

If I insert, then remove, an element from a set, it’s gone. This is why, in the original example, the set winds up empty–every element ever inserted into the dish has been removed.

If I add, then subtract, a number from an infinite series, is it “gone”? Not necessarily. Your example 2:

1 + 1/2 - 1 + 1/3 + 1/4 - 1/2 + 1/5 + 1/6 - 1/3 +…

Your analysis of this series:

I assume you say this because of your Theorem:

This theorem doesn’t say anything about the sum of a series, it deals only with a set of the terms. I see no contradiction between this theorem and the fact that the series sums to log(2).

Here’s what can happen in series. Take our favorite:

1 + 1/2 - 1 + 1/3 + 1/4 - 1/2 + 1/5 + 1/6 - 1/3 +…

Rearrange it so the first terms are all positive for a while:

1 + 1/2 + 1/3 + … + 1/n

I don’t know what n is, let’s just say it’s really huge, like on the order of that bound for Graham’s number that Chronos mentioned. Now stick in

-1.

Now let’s continue with the positives again, say, for as many terms as we did the first time. Now stick in

-1/2

and continue with a huge string of the positive terms, and so forth.

Now if we consider the set of moves and antimoves, as you were doing, all the moves and antimoves cancel each other out. The set of things that don’t get cancelled is empty.

That’s because, being a set, it doesn’t matter at what “rate” (whatever that means) things are being put in or taken out, or anything like that. Think of the set as a static object: either the element is there, or it isn’t; nothing’s “moving around” or going anywhere; it’s just a set, sitting there. Everything put in it was taken out, so it’s empty.

Conversely, it can be helpful to think of the series, in some sense, as being dynamic, as you were doing with the fly analogy. The rate at which things are coming now does matter, and clearly in the above reordering, the positives are coming at such a fast rate they are “overpowering” the negative terms.

Again, think of the set as just “sitting there”–if you remove everything it ever contained, it’s gotta be empty.

And think of the series as being dynamic: the rate at which the terms come now (and only now) becomes important, and even if things cancel each other out in the sense of the sets, the “antimoves” may be spread too thinly to make a difference in the sum.
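
Here's a rough numerical sketch of that reordering (mine, with modest block sizes standing in for the enormous ones above): the terms are still just the "moves" +1/n and "antimoves" -1/n, each used exactly once, but a long block of positives comes before every single negative, and the partial sums climb instead of settling near zero.

    BLOCK = 10_000                     # positive terms per block (pick anything large)
    partial = 0.0
    next_pos = 1                       # index of the next +1/n to use
    next_neg = 1                       # index of the next -1/n to use
    for _ in range(20):                # 20 blocks, each followed by a single negative term
        for _ in range(BLOCK):
            partial += 1.0 / next_pos
            next_pos += 1
        partial -= 1.0 / next_neg
        next_neg += 1
    print(partial)                     # roughly 9 here, and it keeps growing with more
                                       # blocks, even though every move has an antimove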

Oh, forgot this:

Well, if he’s only writing integers on the chips, I’d say no number is written on the chips at the end–the chip just has the dots on it.

Cabbage has already explained (better than I would have) something I realized about case IIIC. I was at first thinking it actually was similar to the OP Case 1, but as he explained, you wouldn’t get the integers on the chips. So I apologize for any confusion I created over that.

One other thing about your argument, knock knock:

I may be reading this wrong. You seem to be basing the irrelevance of your choice on the current move on the fact that you can ‘make up for it’ in the future, and then concluding that you therefore never have to make up your mind. It sounds like simple procrastination. You actually do have to choose at some point, otherwise the argument doesn’t work.

I also gave a slight amount of thought to the variation I brought up involving random choosing of method (which has only slight relevance to this). I think the expected number of chips would be zero now. Given an infinite number of trials, the prob. that chip 1 is in the cup goes to zero, and the same holds for the next highest integer. But then again you need an infinite number of trials for each integer for this to work (which unless I’m mistaken is an uncountably infinite number of trials), so I’m probably wrong again.

Here’s a thought…

Cantor had a proof that the real numbers had the same cardinality (a measure of infinity) as the set of all possible subsets (including infinite subsets) of rational numbers. It worked something vaguely like this:

First, you take all rational numbers, and order them. You can do this - it’s possible to put them into 1 to 1 correspondence with the natural numbers.

Now given any subset of the rationals, you can construct the following real number. You write it in binary. It starts with “0.”, then the next digit is a 1 if the first rational number (in your ordering scheme) is in this subset, and a 0 if not. The next digit is a 1 if the second rational number is in the set, and a 0 if not. And so on…

A couple of examples here:

Let’s say your ordering of the rationals begins (0, 1, -1, 2, -2, 1/2, -1/2, 3, …)

The set {0, 1} is represented as 0.11 (binary), which is 3/4
The set {0, 1, 2} is represented as 0.1101 (binary), which is 13/16
The set of natural numbers {1, 2, 3, …} would be an irrational number starting with 0.01010001…

The set of all rationals (which is the largest subset) would be represented by 0.111111… which is 1.0

The empty set would be represented by 0 (it’s 0.00…)
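
Here's a rough sketch of that encoding for finite subsets (my own code, using the ordering above; the function name is just made up for illustration):

    from fractions import Fraction

    ordering = [Fraction(0), Fraction(1), Fraction(-1), Fraction(2), Fraction(-2),
                Fraction(1, 2), Fraction(-1, 2), Fraction(3)]    # first eight rationals only

    def encode(subset):
        # binary fraction 0.d1d2d3..., where d_k = 1 iff the k-th rational is in the subset
        return sum(Fraction(1, 2 ** (k + 1)) for k, q in enumerate(ordering) if q in subset)

    print(encode({Fraction(0), Fraction(1)}))                 # 3/4
    print(encode({Fraction(0), Fraction(1), Fraction(2)}))    # 13/16
    print(encode(set()))                                      # 0 (the empty set)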

So, coming back around to this thread (you were wondering about that, weren’t you?) - the set from the first page where the smallest numbers were continually taken out would be:

0.000000…

Which is 0. The empty set. It has no elements. The bowl must be empty.

I’ve already responded to this.

I’m still thinking about this one.

This argument is why I specified the new number be written first, then the old number removed. The chips never have no numbers written on them.

Tyrrell McAllister writes

Interestingly, I don’t think anyone has considered what happens if the dot-notation is applied to the OP. In the OP:

take “numbers written on them” to mean (or be replaced with) “numbers of dots marked on them”. Is the answer to the number of chips in the dish at t=2 different? I’d expect the answer not to be different, but since I’m not seeing eye-to-eye with several of you, I don’t feel I can assume everyone here agrees with that.

ZenBeam, this seems to me to be the major flaw in your argument:

A set is entirely defined by the elements it contains (Axiom of Extensionality), nothing more.

In particular, say I have two sets, one set P consisting of positive real numbers (just some collection, not necessarily all of them, in fact, let’s say it’s a countable set, considering what is to follow), the other set N consisting exactly of the negatives of the reals in the previous set. I define a new set:

X = {real numbers x : (x is in P but -x is not in N) or (x is in N but -x is not in P)},

then indeed this set X is empty. A set is defined by the elements it contains, and we can easily demonstrate, for each real number x, that x is not in X, hence X is empty.

What happens when we try to form a series, where each term is coming from set P or set N? Let’s say the series uses all of the terms from P and N (we can do this, since I stipulated that both are countable).

Clearly we have a situation similar to yours–every “move” is counteracted by an “antimove”, and vice versa, by the way I defined P and N.

Can we then conclude the sum is zero (since a given move must always be counteracted by its antimove, and vice versa)? Definitely not. Any number of things can be a problem, but let’s focus on rearrangements.

(Now this is where the ordering becomes important).

As I mentioned in my previous post, imagine an incredible number of positive terms leading off the series, followed by a single negative, followed by another slew of positives, again followed by a single negative term, and so on.

Now (and I’m being somewhat vague since the details would depend on the specific series, and I’m trying to explain this intuitively), the positive terms are coming so quickly, some of them may not be “cancelled” by their corresponding negative term for quite a while. In fact, if we look at the partial sums, we will see that at all times, there are positive terms that have yet to be cancelled.

And how is it that we define the sum of an infinite series? That’s right, as the limit of the partial sums! And since no partial sum will consist of just “moves” and their corresponding “antimoves” (the negatives are coming too slowly), this property of the partial sums may very well get carried over into the limit. Hence, there’s absolutely no reason to expect that the infinite series will sum to zero.

Note, once again, that this is entirely different from the way the sets themselves behave. In that case, it was simply a matter of, “Well, this x is in P, let me look around and see if I can find a corresponding -x in N…Ah, there it is! Well, they cancel, so my set X is still empty”, and that’s all there is to it–order has no bearing on it.

Well, they never have no numbers written on them so long as there are numbers left to write. What happens when we have passed through all the numbers, and there are no numbers left to write?

Yeah, I don’t think this is any different from the problem in the OP.

I’ve got a question that I was wondering about last night. For those who claim (in the OP’s problem) that the dish is not empty, one thing was never made clear to me. Do you claim that it contains integers? Or do you claim that it magically, somehow, contains “something else”?

If it contains “something else”, how can that possibly be so, when, by definition of the problem, God only puts natural numbers in the dish? Did the “something else” just, I don’t know, somehow materialize there?

To claim that the dish contains…“something else”…epitomizes the very notion of “hand waving”.

In response to knock knock’s Case II, described in this post, I wrote,

To this, knock knock replied

Sorry to have taken so long to reply to this. I still believe that there are uncountably many coins at 2 hours in this scenario. The reason is that the splitting of the coins over the course of the 2 hours produces an infinite binary tree, and the coins that exist at the 2 hour mark are the terminal nodes of that tree. Infinite binary trees have uncountably many terminal nodes because there is a bijection between the terminal nodes and real numbers written in base 2. In particular, a given terminal node may be described by the sequence of left and right branches that must be traversed to get to that node from the root. So, the node I’d get to by always going left would be 0.00000000…; the node I’d get to by always going right would be 0.11111…; the node I’d get to by going first left, then right twice, and then only left, would be 0.01100000…; and so on.
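
For concreteness, here is a little sketch of that correspondence (mine; the function name is made up) for finite prefixes of a branch, reading left as 0 and right as 1 and padding with 0s (i.e., "and then always go left"):

    def branch_to_binary(prefix, width=10):
        bits = "".join("0" if step == "L" else "1" for step in prefix)
        return "0." + bits.ljust(width, "0")

    print(branch_to_binary("LLLL"))   # 0.0000000000 -- always going left
    print(branch_to_binary("LRR"))    # 0.0110000000 -- left, then right twice, then only left
    print(branch_to_binary("RRRR"))   # 0.1111000000 -- a prefix of 0.111..., always going right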

That this scenario seems isomorphic to the case in the OP so long as everything is finite is another good example of how things can go haywire when things become infinite, in the sense that what was irrelevant in the finite case suddenly dramatically affects the outcome.

ZenBeam has been trying to use the reasoning from my proof in this post to derive a contradiction. In his Example 2, he argues that my reasoning leads to one conclusion, while in Example 2b, he argues that the same reasoning leads to another contradicting conclusion. Therefore, he concludes, my reasoning is invalid.

It seems to me that there are several problems, in both of these proofs, outside of the portions borrowed from me. Only after both arguments are logically unimpeachable, except for those steps exactly mirrored in my proof, can we conclude that my proof is invalid.

Rather than detail every questionable point now, I will ask ZenBeam to clarify one point at a time, moving on to the next as each difficulty is resolved, until we reach a conclusion.

So, first of all, in ZenBeam’s account of Example 2b, he writes,

Here ZenBeam seems to be reasoning as though series had the following property:
    Given a series Σa_n, if some subset of the terms in Σa_n may be canceled with each other (i.e., “compensate” for each other), and if the series Σb_n that results from removing this subset of terms from Σa_n is absolutely convergent, then Σa_n = Σb_n.

But this is certainly not true, as may be seen by comparing
1 + (-1 + 1) + (-1 + 1) + (-1 + 1) + . . .
= 1 + 0 + 0 + 0 + . . .
= 1 (absolutely)

and

(1 + -1) + (1 + -1) + (1 + -1) + . . .
= 0 + 0 + 0 + . . .
= 0 (absolutely).

To draw his conclusion, ZenBeam must show just how my reasoning implies that this method of evaluating series is valid. But this seems highly improbable, because my argument never involved what it even means to perform addition on infinitely many numbers. The entire apparatus of limits, sequences, and complete fields must be defined to do this, all of which are far outside the scope of my proof. So since I made no claims about how series should be evaluated, it is hard to see how I made any wrong claims about how series should be evaluated.

The OP is not asking what is the set of chips in the dish. The OP is asking how many chips are in the dish. This is not a set, this is a series. I have been trying to use your set arguments to show a contradiction in a similar problem, and the response has been that I can’t use sets for that. Why should any of us believe you can use sets to obtain the answer to the OP? To use your own words, there’s absolutely no reason to expect that the infinite series will sum to zero.

Tyrrell McAllister writes:

Obviously this is untrue, yet that is what your argument is doing when showing that the number of chips is zero. You say you made no claims about how the series should be evaluated, but you did. You evaluated the series by canceling chips added with chips removed, then summing what was left (an empty set). Again, if this does not work on the fly, why should we have any confidence it works on the chips?

If it contains zero chips, how can that possibly be so, when, by definition of the problem, God only increases the number of chips at every step? I’ve addressed your question three times previously in the thread so I’ll let someone else answer.

Fine. By this same reasoning there’s no contradiction between the empty set of chips and an infinite number of them. What’s on the chips? Who cares? Maybe they’re blank. In response to this:

you were willing to accept blank chips:

So you shouldn’t have any difficulty with that.

OK, this whole post is kind of snarky. It’s late, and I’m not gonna spend the time to de-snark it. I’ll just apologize in advance.

ZenBeam, so the original problem isn’t a set, after all?

We’ve got a dish, which we can think of as the set. We’ve got the numbered chips, which we can think of as elements that either will or won’t be in the set at the end. And to finish it off, we have a cardinality function, which tells us how many elements are in any given set. This is all we need to answer the original question.

A couple of questions:

  1. Do you agree that modeling it as a set, the set is empty at the end?

  2. How would you justify that it’s not a set? Would you justify that it can’t be a set simply because the conclusions are counterintuitive and don’t meet your expectations? (I would certainly hope for a better justification than that).

ZenBeam’s series: log(2) = 1 + (1/2 - 1) + 1/3 + (1/4 - 1/2) + 1/5 + (1/6 - 1/3) + …

The problem you’ve shown with grouping terms only holds for series whose terms do not approach 0. Informally, the definition for the sum of a series is that a series sums to L if you can get as close to L as you want just by going far enough out in the series (i.e. adding up terms in a partial sum), and you stay that close once you go farther out. We know that the series for log2 converges, and that, if you’ve added up 3n terms in ZenBeam’s series, you’ll have the same thing as if you’d added up 2n terms in the series for log2. So I can get as close as I want to log2 by going out far enough in that series, so, if I stick to looking at ZenBeam’s series after 3n terms (i.e. in chunks of 3), I can get as close as I want to log2. And, since all the terms are getting small, I won’t get too far from log2 on the 3n+1 and 3n+2 terms. So the series converges to log2.
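
A quick numeric check of this (my own sketch, nothing rigorous): the 3n-term partial sums of ZenBeam's series match the 2n-term partial sums of the alternating harmonic series, and both creep toward log 2.

    import math

    def zenbeam_terms():
        # 1, 1/2, -1, 1/3, 1/4, -1/2, 1/5, 1/6, -1/3, ...
        k = 1
        while True:
            yield 1.0 / (2 * k - 1)
            yield 1.0 / (2 * k)
            yield -1.0 / k
            k += 1

    n = 100_000
    gen = zenbeam_terms()
    s_zen = sum(next(gen) for _ in range(3 * n))
    s_alt = sum((-1.0) ** (j + 1) / j for j in range(1, 2 * n + 1))
    print(s_zen, s_alt, math.log(2))   # all three agree to several decimal places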

Maybe you can argue against using series at all, or using any definition of convergence, but don’t worry about ZenBeam’s way of adding up the series.

I’ll get back to you a little later, Tyrrell McAllister, about your comments about my comments.

knock knock:

I thought I had killed this one already. ZenBeam’s argument was that since every positive term is eventually cancelled by a negative term (and vice versa), the series sums to zero. This simply isn’t true. That’s a rearrangement, and I’ve already addressed the problem with doing that, in my first post on this page. And my argument does specifically apply to ZenBeam’s series above. In fact, I can rearrange that series so that it doesn’t converge at all, so what’s the point?

No. I don’t believe your argument is valid.

The answer the OP is looking for is not a set, and I shouldn’t have said it was a series. It’s a value. The problem expressed as a series would give us the value if only the series were convergent, but it is divergent. You try to get around this by expressing the problem in terms of sets. You believe that by using sets, you aren’t rearranging terms, and can therefore do this (I hope I’m not putting words in your mouth here, but I really don’t think you’ll disagree with that statement). I believe what you are doing is as invalid as rearranging the terms of a series, and is likely equivalent to doing so. I base this on my examples 2 and 2B, where I express the problems as sets and run into the same errors as I would run into rearranging terms in a non-absolutely converging series.

I’m not certain precisely what issue is causing the difficulty in expressing the problem in terms of sets. Regardless, the contradictory solutions to examples 2 and 2B are sufficient for me.

This just occurred to me as I was writing the above paragraph. I’ll confess I haven’t given it much thought, but I’ll throw it out here anyway. Perhaps the issue with using sets is that the sequence of sets of chips obtained at each step does not converge to your final set (and in fact does not converge at all). You’ve requested I show continuity between finite and infinite series. Can you (or anybody) show continuity between the sequence of finite sets at each step, and the final, infinite (or empty) set?

The point of my argument is that this is what your and Tyrrell McAllister’s arguments using sets are doing. Let me be clear here: I understand full well that this is not allowed in summing series which are not absolutely convergent. I am following the set arguments attempting to show the dish is empty when I do this. As I said above, I believe what you are doing is as invalid as rearranging the terms of a series, and is likely equivalent to doing so.

My point all along has been, why would it have to converge or be continuous in the first place? All I’ve done is merely investigate the sets and let them speak for themselves.

There’s an analogous thought experiment I’ve been tossing in my head for a few days now, which may be worth bringing up. Somehow, to me, this experiment makes the conclusion somewhat more intuitive, and I’m hoping that it may do the same for you.

We have a village, like any other village, except this village has a (countably) infinite population. The people of this village are numbered, 1,2,3,4,… No person has two numbers, no number is on two different people, everybody gets a number, every number is on somebody, blah blah blah. What I’m saying is there is a 1-1 correspondence between the numbers and the people.

Anyway, the people of this village live in houses, like most villagers do. And this village has curfew at midnight–everyone must be in their house at midnight. Let’s even say that every night, exactly at midnight, a dragon swoops through the town, breathing fire over the entire village, killing everyone in sight. (Which, of course, would explain why their houses are fireproof.)

On one particular night, at 10PM, persons 1 and 2 go out to enjoy the weather, or whatever. At 11PM, persons 3 and 4 also go outside, while person 1 goes back home. At 11:30, 5 and 6 go out, while 2 goes home. And so on…I think we all know the setup by now.

As it gets closer and closer to midnight, more and more people are coming outside. But watch any particular person, say the person numbered N. You will notice that 1/2^(N-1) hours before midnight, person N heads home. The same is true for everyone else.
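
(If anyone wants the schedule spelled out, here's a little sketch of it, mine, with the going-out times inferred from the pairs above: person N goes out 2^(2 - ceil(N/2)) hours before midnight and heads home 1/2^(N-1) hours before midnight, which is always strictly before midnight.)

    def out_time(n):                   # hours before midnight when person n goes out
        return 2.0 ** (2 - (n + 1) // 2)

    def home_time(n):                  # hours before midnight when person n goes home
        return 1.0 / 2 ** (n - 1)

    for n in (1, 2, 3, 10, 30):
        assert out_time(n) > home_time(n) > 0    # everyone is back before midnight
        print(n, out_time(n), home_time(n))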

I can even imagine a census taker, having all the people line up in the morning. As he goes down the line, he asks each person what time he came home last night. Each person tells him “I made it home before midnight and was safe from the dragon”. The census taker checks off every one on his list.

“All present and accounted for”.

How many people broke curfew, and were out at midnight? Was anyone killed by the dragon? And…dare I ask it again…who, if anyone, was killed?

As Cabbage has pointed out, the OP is asking for the cardinality of a set. Set cardinalities may not, in general, be found using series. Before using series to find this cardinality, you must first justify that it is valid to do so in this particular case. At the very least, this will require making some kind of continuity argument about how the cardinality of the set changes as a function of time.

I think that you are confusing two distinct usages of the word “add”. On the one hand, we say that we are “adding something up” when we are counting the elements of (i.e., finding the cardinality of) a set. On the other hand, we say that we are “adding” when we perform the binary operation of addition on some elements of R, the set of real numbers. These two notions of “adding” are different. The first case may be applied to sets of arbitrary size, and the “adding” here is done using bijections and the other tools of set theory. In the second case, the machinery of limits and so forth must be brought into play to perform this kind of adding on a nonfinite number of elements of R.

You wrote.

(bolding added)
This is an example of the confusion I referred to above. I summed no series; I evaluated cardinalities.

I will try to give another example of how these two notions of “adding” are different. Consider the subset S of R below:

    { -1/1, 1/2, -1/3, 1/4, -1/5, . . . }

If I “add up” the elements in this set using the first meaning, I will get a certain cardinality (aleph-0), and I will get this answer no matter what order I count them in. But if I “add up” these elements by evaluating a series whose terms are these elements, I can get any answer whatsoever, depending on the order of the terms in the series. So, again, “adding up” can mean two very different things.
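
A quick numerical sketch of the difference (mine, using finite chunks of S, so it only suggests the limits): the count of an initial chunk of S doesn't care what order you list it in, but the partial sums of the series do. In the order written above they head toward -log(2); take the positive terms twice as fast and they head toward -(1/2)log(2) instead.

    import math
    from fractions import Fraction

    N = 200_000
    natural_order = sum((-1.0) ** n / n for n in range(1, N + 1))
    rearranged = 0.0                   # 1/2, 1/4, -1/1, 1/6, 1/8, -1/3, ...
    for k in range(1, N // 3 + 1):
        rearranged += 1.0 / (4 * k - 2) + 1.0 / (4 * k) - 1.0 / (2 * k - 1)
    print(natural_order, rearranged)               # about -0.693 vs about -0.347
    print(-math.log(2), -0.5 * math.log(2))        # the two limits they're approaching
    print(len({Fraction((-1) ** n, n) for n in range(1, N + 1)}))   # 200000 either way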

You wrote

This is just one of the bizarre phenomena that can occur when things get infinite. Consider my set S above. Suppose I removed all elements of this set, one after the other, at decreasing intervals so that I’ve removed every element after two hours. Then no matter what order I do this in, the set that results at 2 hours will be empty. “But,” you might say, “how can that possibly be so, when, by definition of the problem, you have infinitely many elements in S at each step?” What can I say? It may be unintuitive, but it is the logical conclusion.

knock knock wrote

I still don’t think the principle works in general, for the reasons Cabbage has been pointing out. For example, a series converging to 1 may be constructed whose terms are elements in the set

    S = {1/n : n in N} union {-1/n : n in N}

(Since the series converges, the terms tend to zero, and so satisfy your additional condition.) Now “interleave” this series with some absolutely convergent series, like 0+0+0+…, to construct the series Σa_n. Though the terms in S cancel (in the sense that for each 1/n, there is a compensating -1/n), the value of Σa_n is different from the value of 0+0+0+… that results from the removal of the terms in S from Σa_n.
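
Here is a rough numerical sketch of that construction (mine; the greedy rule is just one convenient way to build such a series): keep taking unused terms +1/n while the partial sum is at most 1, and unused terms -1/n while it is above 1. Since the terms tend to zero, the partial sums home in on 1, even though every +1/n is eventually matched by a -1/n, and interleaving zero terms in between would change nothing.

    partial = 0.0
    next_pos, next_neg = 1, 1
    for _ in range(500_000):
        if partial <= 1.0:
            partial += 1.0 / next_pos      # use the next unused +1/n
            next_pos += 1
        else:
            partial -= 1.0 / next_neg      # use the next unused -1/n
            next_neg += 1
    print(partial)              # about 1.0
    print(next_pos, next_neg)   # both keep growing, so both halves of S get used up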