Indistinguishable said:
Have you got a cite for that? Not that I don’t trust you, but I’d like to be able to point to something other than “some guy on the internet”. 
You don’t need a cite – just work it out as long division.
I don’t need a cite for “by the above definition of how mathematicians use the language of infinite decimal notation”?
A cite is not an unreasonable thing to ask for. Will Wikipedia do? I’m having a surprisingly hard time finding anywhere else on the Internet that defines how to interpret infinitary decimal notation. I was hoping MathWorld would, but it doesn’t. (Speaking as a mathematician, it is interesting to observe that, in my experience, we all use Wikipedia quite a bit and never bother with MathWorld anymore, anyway… That having been said, Wikipedia has the problem that, on a “contentious” issue like this, it is liable to be dismissed as lacking proper authority)
(To clarify, the only cites I can potentially produce will be for the fact that the definitions in post #48 do indeed describe what professional mathematicians mean when they standardly use the notation of infinite decimal sequences. I can’t, of course, produce a cite for anything much stronger than that; in particular, I can’t produce a cite that says “You’re not allowed to consider any different kind of system with different rules”, since, of course, you are allowed to; you just wouldn’t be speaking about the same thing anymore…)
[Also, the only cite I can give right now requires one to take the Wikipedia definition at the top of this article, combine it with the Wikipedia definition of limit, and apply various bits of reasoning to show it equivalent to the more elementary formulation I gave. I could make it more direct, but at the cost of giving merit to the counterargument “Hey, you could’ve just inserted that into Wikipedia right now!”]
If you know how to do long division, you don’t need a cite, because you can work it out for yourself. Just try dividing 1 by 3 and you’ll see that every digit comes out as “2 is too small, 4 is too big, and 3 is the Goldilocks number”.
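That long-division argument can be put in code, for anyone who wants to check it mechanically (a sketch of my own, not something from the thread; the function name is hypothetical). The point is that when dividing 1 by 3, the remainder is 1 at every step, so every quotient digit is forced to be 3:

```python
def long_division_digits(numerator, denominator, n_digits):
    """Yield the first n_digits decimal digits of numerator/denominator,
    assuming the fraction is less than 1, by schoolbook long division."""
    remainder = numerator
    digits = []
    for _ in range(n_digits):
        remainder *= 10
        digits.append(remainder // denominator)  # the "Goldilocks" digit
        remainder %= denominator                 # same remainder -> same next digit
    return digits

print(long_division_digits(1, 3, 10))  # [3, 3, 3, 3, 3, 3, 3, 3, 3, 3]
```

Since the remainder repeats (it is 1 after every step), the digits must repeat too; this is exactly the "work it out for yourself" observation above.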
This isn’t even a special issue of “what mathematicians mean by .3333(3),” unless you start to wander in some funky alternate mathematics, or you extend the issue to the basic question of what any decimal fraction means, or what positional notation means in any and all cases.
Suffice it to say that if .3333(3) doesn’t exactly equal 1/3, and .9999(9) doesn’t exactly equal 1, then calculus doesn’t work, Achilles can never catch up with the tortoise, and the universe most likely can’t exist.
This could be taken to demonstrate that 1/3 cannot have a decimal representation with a digit which is smaller or larger than 3. But that’s not enough; those who object that “0.3333…” is not exactly equal to 1/3 often don’t contend that it has some other decimal representation instead.
One perfectly well could, in fact, give a non-standard interpretation to decimal sequences on which 0.3333… would not be interpreted as 1/3. [For example, it seems to me many of the objectors are trying to get at an account something like “An infinite decimal sequence represents the rational number in the internal logic of the topos (Set^N modulo the free nonprincipal filter on N) given by truncating the sequence to Omega many digits, where Omega is the nonstandard natural number generating this topos”]. This just wouldn’t be the standard interpretation. It’s not unreasonable to ask for a cite regarding the (extramathematical) facts of what the standard interpretation is.
(“the free nonprincipal filter” was a poor way to refer to this; “the Fréchet filter” or “the cofinite filter” would be better wording.)
As I said, “funky alternate mathematics”. Legitimate in their own right, but not the plain-vanilla axiom set of continuous real (or sometimes complex) numbers that, to 99% of people, are the only mathematics, apart from, perhaps, about a one-week tutorial on set theory.
It’s not intrinsically funky or alternate. The very fact that many laypeople apparently struggle to re-express it indicates that “the plain-vanilla axiom set of continuous real numbers” doesn’t exactly take exclusive hold; and, indeed, why should it? The layperson is never taught any particularly concrete definition of the standard account of real numbers anyway, in my experience; people seem to consider even the most introductory definitions of “real analysis” college level material. It’s no wonder, then, that those who hear about and are made to nominally use real numbers their whole life without ever having been given a definition try to come up with their own accounts of how infinite decimal notation and so on work; accounts which often aren’t entirely coherent or developed (I’m not claiming these are mathematical diamonds in the rough), but which often do at least contain elements of perfectly plausible and even useful ideas which they simply fail to acknowledge aren’t the ones standardly used to define the interpretation of that notation.
Also, it’s worth noting that there are two not-quite-identical elements at play: the number system (real numbers vs. something else) and the notational system (interpreting infinite decimal notation the standard way vs. something else). A person could be perfectly familiar and comfortable with the real numbers and still want to interpret infinite decimal notation in a different way from the standard; ideas about the latter do not necessarily imply anything about the former.
(For that matter, the standard account of the “plain-vanilla real numbers” number system really is a lot messier than many people appreciate (e.g., observe the distinctions in constructive mathematics between the MacNeille reals, the Dedekind reals, the Cauchy reals, the lower reals, the upper reals, and the localic reals), and it is totally reasonable to want to explore other number systems which better capture whatever intuitions it is one finds interesting or useful for whatever applications one is concerned with. Indeed, logicians do this all the time (cf. the surreal numbers, Robinson-style analysis, smooth infinitesimal analysis). Mathematics should not be one-size-fits-all.)
Indistinguishable said:
That is sufficient, and I understand.
Yeah, a “you just wrote that yourself” is not really a good cite.
However, I am able to get something useful out of the wiki cite. Though it doesn’t really address the question asked.
I am familiar with series expansions like Taylor series from calculus. I get the concept. I’m just looking for a math site or text that declares “this is the standard meaning for that notation”. Still, what I did get from the wiki cite you gave and the wiki on 0.999… is useful for my argumentation purposes.
John W. Kennedy said:
That helps with calculating one value, but it really doesn’t help with understanding or conveying what an infinite repeating decimal means. It certainly doesn’t help bridge the gap to 0.99(9) = 1.
A little more elaboration on that comment could be useful. You’re talking about limits?
I think there are two important points. The first point is that 0.333… is generally interpreted as the infinite sum
0.3 + 0.03 + 0.003 + …
The second point is that infinite sums are generally interpreted as equal to the limit of the sequence of partial sums, in this case the limit of
0.3, 0.33, 0.333, …
This second point is to be found in a lot of places, but I’ll lazily point to a Wikipedia article.
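For a reader who wants to watch the partial sums at work, here is a small illustration of my own (not from the thread; the names are hypothetical), using exact rational arithmetic so no floating-point noise intrudes. The gap between 1/3 and the n-th partial sum is exactly 1/(3·10^n), which shrinks below any threshold as n grows:

```python
from fractions import Fraction

def partial_sum(n):
    """Exact value of the first n terms: 3/10 + 3/100 + ... + 3/10^n."""
    return sum(Fraction(3, 10**k) for k in range(1, n + 1))

target = Fraction(1, 3)
for n in (1, 2, 5, 20):
    gap = target - partial_sum(n)
    print(n, partial_sum(n), gap)  # gap is exactly 1/(3 * 10**n)
```

The standard interpretation of “0.333…” is precisely the number those partial sums close in on, namely 1/3.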
The possibility that someone might not understand that 0.3 with an overbar on the three, or any of the substitutes we’re using here of typographic necessity, denote zero-point-infinite-number-of-threes is, of course, one thing, but I haven’t yet seen anyone who actually has that problem. Similarly, although a sufficiently young child might accept it on authority if told that the number line is quantized, e.g., that there is no fraction smaller than, say, 0.001, any reasonably intelligent adult, or even older child, is capable of reasoning, “But what if you divide 0.001 by 2?” In general, the average schlubb may view real numbers naïvely, but the naïve view is a continuous one.
On the other hand, it is painfully obvious that there is a problem in getting over the stile at limits, or in seeing the crucial difference between Δy/Δx and dy/dx.
Well, of course most people view the reals “continuously”, but what’s that got to do with anything? It hardly follows that “0.3333…” refers to 1/3 or “0.9999…” refers to 1. That is still a nontrivial matter concerning the notational definition. Like I said, you can understand the reals perfectly well, and still be under the impression that infinite decimal notation works in some other way (e.g., believing that it does not denote a particular real number, but rather some other sort of thing, perhaps an abstraction of the long-term behavior of some sort of process, or perhaps some element of some different “continuous” space of numbers)… You can’t deduce the way the notation works from nothing at all, any more than you could derive Swahili from first principles; you have to be told something about the way the notation works.
Limits just make things unnecessarily confusing, I believe, so far as explaining this notation goes, and I particularly have no idea why you’re dragging derivatives into it, but whatever. All I’m saying is that people need to be told how the notation works, it’s perfectly understandable that they would develop for themselves some misunderstanding of the notation otherwise, and that there is a very easy way to explain the notation which doesn’t require any discussion of limits, epsilons and deltas, or any such thing.
There is a vast difference between not knowing what decimal fractions are, or that some typographical arrangement represents “an infinite number of threes”, which are what keep being suggested as the problem here, despite the fact that anyone who has finished with elementary school should be able to handle them, and despite the fact that no one in this or any other thread seems to have any actual problem with them, and comprehending the mathematical implications of that infinite number of threes, which took mankind millennia to learn to handle (viz., Zeno), and is still resulting in message threads like this, today.
That is one long, difficult-to-parse sentence.
I’m not suggesting that people don’t understand that “0.333…” represents an infinite number of threes. I’m suggesting that people don’t understand what number an infinite number of threes represents, or, indeed, how any infinite decimal sequence is to be interpreted. And this is a nontrivial notational fact which does not immediately follow from understanding how decimal notation works for finite strings. It’s not simply a mathematical consequence of how finite decimal notation works; it involves a choice as to how to extend that notational system to the interpretation of infinite decimal strings as well. Simply saying “Well, there are infinitely many threes” is not enough to draw out any consequences about what this string is to represent, until one has been given an account of how to interpret decimal notation that covers infinite strings.
Try parsing the first half of it, then half of what’s left, then …
And just to head you off at the pass, it’s no good to say “Well, ‘0.3333…’, with an infinite sequence of 3s, must mean the same thing as 3/10 + 3/100 + 3/1000 + 3/10000 + …, with an infinite sequence of terms, and this, as a mathematical fact, comes out to 1/3, so that the whole matter of interpretation follows as an immediate mathematical consequence”. Because the denotation of the sum 3/10 + 3/100 + 3/1000 + … with infinitely many terms is not undeniably 1/3, or even undeniably a real number, except by virtue of our having agreed on a particular standard account of what infinite sums denote; there is the same presence of a convention which must be taught, in taking the denotation of an infinite sum to be defined in such and such a way (perhaps via epsilons and deltas?) extending the familiar case of finite sums. One could give another account of what infinite sums are to refer to; the convention is just that, a convention. A convention with many natural and useful properties, to be sure, but then, many of the alternative possible interpretations have different natural and useful properties as well. As evidenced by the fact that people keep stumbling around trying to express them…
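To make that standard convention explicit (a worked derivation of my own, using the usual limit definition of an infinite sum, not something quoted in the thread): under the standard account, the infinite sum denotes the limit of its partial sums, which the finite geometric-series formula lets us compute directly,

```latex
\sum_{k=1}^{\infty} \frac{3}{10^{k}}
  \;=\; \lim_{n\to\infty} \sum_{k=1}^{n} \frac{3}{10^{k}}
  \;=\; \lim_{n\to\infty} \frac{1 - 10^{-n}}{3}
  \;=\; \frac{1}{3},
```

since the gap \(\left|\tfrac{1}{3} - \tfrac{1 - 10^{-n}}{3}\right| = \tfrac{10^{-n}}{3}\) can be made smaller than any \(\varepsilon > 0\) by taking n large enough. But note that this computation is carried out *within* the standard convention; it does not, by itself, answer the objection above that the convention is a convention.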