Thanks for wording that better. I was struggling to find a way to explain it.
Like I said, it was cited to me.
The epsilon notation is of some use in sneaking up on certain problems from behind; it’s the same basic idea as the transition from Δx to dx in elementary calculus. And it turns up in other areas. For example, “epsilon” designates the smallest possible difference between one value and another in the floating-point computer arithmetic on such-and-such a computer. (On a computer, practically everything that has been said in this thread is false, because a computer can’t carry out something to an infinite number of decimals.)
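If anyone wants to see that floating-point epsilon for themselves, here is a quick Python sketch (my own toy example, nothing specific to any one machine):

import sys

# The gap between 1.0 and the next representable IEEE 754 double.
print(sys.float_info.epsilon)    # about 2.22e-16 on typical hardware

# Crude search for the same value: keep halving until adding it to 1.0
# no longer changes anything; the last value that still made a difference
# is the machine epsilon.
eps = 1.0
while 1.0 + eps / 2 != 1.0:
    eps /= 2
print(eps)                       # matches sys.float_info.epsilon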
Usually they don’t. But computers can operate on many values with an infinite number of decimals by maintaining the definition of rational numbers as a/b, and by using, you guessed it, infinite notation. The most popular methods of representing numbers with computers (IEEE standards, BCD, and binary integers) don’t bother with this, instead limiting the number of decimals maintained, and often using rounding algorithms. The only software I’ve seen that tried to represent infinite decimals was in research and educational tools. I’ve never used them, so I don’t know what limitations they may have had.
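Python’s fractions module is an example of exactly that a/b bookkeeping (my illustration here, not one of the research tools I was thinking of):

from fractions import Fraction

third = Fraction(1, 3)          # stored exactly as numerator 1, denominator 3
print(third * 3 == 1)           # True -- no rounding anywhere
print(float(third))             # 0.3333333333333333 -- the decimal form is the approximation
print(Fraction(1, 3) + Fraction(1, 6))   # 1/2, still exact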
You can handle rational numbers, yes, but rational numbers are common fractions and common fractions aren’t decimal fractions, which abandons the subject of this discussion altogether.
Hell, in algebra-based languages like FORMAC, you can even handle irrationals in a finite space, just as I can do here: π. But mainstream computing hasn’t gone that way. (How many people here even know the name “FORMAC”? How many have used it?)
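For a rough modern flavor of the same idea, here is a SymPy sketch (my stand-in for illustration, obviously not FORMAC itself):

import sympy

x = sympy.pi + sympy.sqrt(2)      # kept as symbols, no decimals involved
print(x)                          # something like sqrt(2) + pi
print(sympy.sqrt(2) ** 2)         # 2, exactly
print(sympy.Rational(1, 3) * 3)   # 1, exactly
print(x.evalf(30))                # only here do we settle for a finite number of digits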
Yes, computers are a different ball of wax than pure math. For instance, I ask the question
does 1 = 1.00? does 1 = 1.0~?
In pure math, those are equal. In computer software, the placeholder count may affect how it is treated. In applied mathematics, the placeholder count is a measure of exactitude.
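As a concrete example, Python’s decimal module (just one piece of software, my own example) behaves the way I mean: the two are equal in value, but the placeholder count is tracked and carried through arithmetic.

from decimal import Decimal

a = Decimal("1")
b = Decimal("1.00")
print(a == b)               # True  -- same value
print(a, b)                 # 1  1.00 -- but the trailing zeros are remembered
print(b * Decimal("0.5"))   # 0.500 -- and they propagate through arithmetic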
But all of this debate is about pure math, not software or applied math.
I had this conversation with some folks at work who didn’t want to accept 0.9~ = 1.
I pulled out the “Let x = 0.9~, then 10x = 9.9~” etc argument. One guy refused to accept that one because I “assigned the value to X, so it isn’t proper to then solve for x”.
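(Spelled out, the argument is just:
x = 0.9~
10x = 9.9~
10x - x = 9.9~ - 0.9~
9x = 9, so x = 1.)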
He also didn’t like decimal notation for infinite fractions, saying “0.33~ is an approximation”. I tried to argue with him that that notation is just as exact as 1/3, it’s just that at any point you try to make a calculation entering digits by hand, you are immediately truncating and thus making it an approximation. He wouldn’t accept my explanation.
I showed him the Wikipedia article on 0.999…
He finally came back with a proof that he accepts. It uses fractional notation rather than decimal notation.
I’ve slightly tweaked the presentation (fractions are harder to write here), but that is what he presented. Nicely done, but one has to accept the “given” that defines the value of the summation. (I’m sure that’s shown elsewhere, but it’s assumed here.)
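Roughly, the textbook version of that fractional-notation argument goes like this (his write-up may have differed in the details):
0.9~ = 9/10 + 9/100 + 9/1000 + …
     = 9 × (1/10 + 1/100 + 1/1000 + …)
     = 9 × 1/9     (the “given”: the infinite sum 1/10 + 1/100 + 1/1000 + … equals 1/9)
     = 1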
yeah, the way computers usually do things doesn’t help with this problem. i’m not a mathematician, but simple logic and rudimentary knowledge of pure math resolves this:
1/3 = 0.3~
3 * 1/3 = 1
use elementary arithmetic on 3*1/3 and you get 0.9~
so 0.9~ must = 1
but whose fault is the variation? is it the notation, or is it the elementary arithmetic?
And most of them are full of shit. 
True. But since we’re talking about .999~, it’s kind of a given that we’re talking about pure math.
If 1/3 is not equal to .333… then what is it equal to?
ok, now i don’t remember when i learned this, but 0.3~ is a representation of 1/3. i think a lot of people think that the infinite repeat is an artifact of long division, and that 0.3~ is an approximation because the exact value can’t be resolved. but if you realize that the repeating decimal is actually infinite, then 0.3~ is exactly 1/3.
the part i’d never even considered before is 0.9~ = 1. assuming that notation is normalized, i.e., 1 is represented as 1.0~, then we have two ways of representing 1. i’m not a mathematician, so i’ve never considered this, but apparently the notation does not have a single representation for each number.
so, all of you pure mathematicians, what are the repercussions of this? obviously there are at least two notations that could represent any integer. what about 1/3? is there another equivalent notation for 0.3~? is there a rule for normalizing notation that eliminates this, or is that just the way it is? (obviously 1/3 is another way to represent 0.3~, but that is based on two numbers and the use of an operator, ditto for 1E0 = 1; i’m talking about the notation to specify one number without the use of an operator.) please educate me.
DanBlather said:
If 1/3 is not equal to .333… then what is it equal to?
If this is directed at me (or the guy who cited it to me), the point was not about 1/3; the point was about the decimal representation .333… He felt that the decimal representation was an approximation, and said one of his instructors had ingrained that in him.
I should have dragged out the specifics of which instructor. My take is that you have to be careful anytime you use the decimal notation, because manually entering it into a calculator will likely cause a truncation - you suddenly will not have an infinite repetition. Ergo, at that point the value becomes an approximation. But that is not the fault of the … notation (or the bar notation or the tilde notation or the parenthesis notation - all forms for “this repeats forever”), it is the fault of limited places in calculators. Or heaven forbid working longhand.
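To illustrate the truncation problem with a quick Python sketch (mine, using the standard decimal and fractions modules):

from decimal import Decimal
from fractions import Fraction

# What you can actually key in by hand: a truncated run of 3s.
approx = Decimal("0.333333333333")
print(approx * 3)            # 0.999999999999 -- no longer exactly 1

# Keeping the exact value instead of its decimal expansion:
exact = Fraction(1, 3)
print(exact * 3 == 1)        # True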
ed malin said:
In base 3, wouldn’t that be 0.1?
And then there’s 1÷3, and 2/6 and 0.0101010101010101… in binary and 0.2525252525… in octal and 0.4 in duodecimal and 0.555555… in hexadecimal and 3[sup]-1[/sup] and an infinite number of other possibilities.
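If anyone wants to check those, here is a throwaway Python function (my own, nothing official) that grinds out the digits of a fraction in any base:

from fractions import Fraction

def digits(frac, base, n=12):
    """First n digits of frac (with 0 < frac < 1) after the radix point in the given base."""
    out = []
    for _ in range(n):
        frac *= base
        d = int(frac)      # the integer part is the next digit
        out.append(d)
        frac -= d          # carry on with the remaining fractional part
    return out

print(digits(Fraction(1, 3), 2))    # [0, 1, 0, 1, ...]  -> 0.0101... in binary
print(digits(Fraction(1, 3), 8))    # [2, 5, 2, 5, ...]  -> 0.2525... in octal
print(digits(Fraction(1, 3), 12))   # [4, 0, 0, 0, ...]  -> 0.4 in duodecimal
print(digits(Fraction(1, 3), 16))   # [5, 5, 5, 5, ...]  -> 0.5555... in hexadecimal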
Basically, none.
It’s an artifact of the base 10 system. Note that 1.9999~ = 2 = 2.0 = 2.000~. There would be equivalents in all other integer bases. (Well, maybe not -1, 0, and 1. And we don’t want to get into transcendental bases. :D)
ok, thank you. so does this artifact apply only to integers? is .19~ the same as .2? i’m trying to get hold of an explanation that fits in my limited knowledge of pure math.
for the above question, and prior questions, please consider i am asking about normalized representations of numbers in base ten (or some other specific base). so any number expressed in a finite number of digits would be normalized, e.g., 1 -> 1.0~. obviously numbers can be expressed as the result of an operation like 1/3.
thank you for that link… so much so that Timecube needs another go. Also… simply put, 3/3 of .999 is not 3/3 of 1. Duh.
And my god what a long amount of time and effort put on this thread.
No, it doesn’t apply only to integers. Your example .19~ = .2 does hold. In fact, any decimal which terminates can be rewritten in this fashion to end in a string of 9’s.
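Same trick as before, for anyone who wants to see it:
x = .19~
10x = 1.9~ = 1 + 0.9~ = 1 + 1 = 2
so x = 2/10 = .2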
thanks for all the info guys! wikipedia (surprisingly) has a good article on this subject, that references back to the SD. the section on infinite series seems to be the explanation i heard ever so long ago. and the section on the impossibility of unique representation was what interested me the most because it addresses the lexicographic issues.
as for exapno’s quote:
Your argument is wrong from start to finish. I’d like to say that about a lot of things people say here, but refrain because political or economic arguments are seldom totally wrong. This is math and the rules are different. So it’s refreshing to have a case where I can say that at the top of my voice and not fear any contradiction.
i actually understand this feeling quite well. i suppose it’s a matter of choosing your poison, but there are technical subjects where i get tired of dealing with people who can’t understand that there is a provable, definitive answer.