Perhaps The Master is bored and decided to throw this particular hornet’s nest into the fray for entertainment value.
Has anyone who wasn’t immediately convinced by the usual arguments been convinced eventually?
… the tortoise?
The first repetition of the solution convinces them .9 of the way. The next repetition gets them .99 of the way. If we keep this up long enough …
But this will only be effective if each repetition of the argument takes half the time of the previous one.
This is getting tensor and tensor …
Not that I can recall. But I’ve seen bystanders come in from the sidelines and go “I really didn’t get it until I saw that explanation.” Also it’s a fun exercise, at least for a teacher, to see if one can get at the root of miscomprehension.
Again: So what?
In fact, your constant-speed process does have a last point, 1. But with 0.999… we are dealing with a process of infinitely many steps: infinitely many digits, no last digit, no last step, no largest number of nines. Yes, there are infinitely many 9s (0.9, 0.99, 0.999, …) in [0, 1]; that’s correct. What you don’t want to see is that these infinitely many 9s (0.9, 0.99, 0.999, …) are all in [0, 1) as well. So point 1 isn’t needed at all. Look up set theory.
There is no largest number of nines. We are passing infinitely many digits without reaching a last point. There is no last step after an infinite number of steps!
The problem under discussion is NOT that we’ve got a residual distance after each finite number of steps.
t = 0: I move my pen from point 0 to point 0.9 of the number line
t = 0.9: I move my pen from point 0.9 to point 0.99 of the number line
t = 0.99: I move my pen from point 0.99 to point 0.999 of the number line
…
Even at t = 1, which means after an infinite number of steps, we haven’t reached point 1. A residual distance is not defined.
Then what point have we reached?
It can’t be 0.999 because we reached that at t=0.99.
It can’t be 0.9999 because we reached that at t=0.999.
It can’t be 0.99999 because we reached that at t=0.9999.
It can’t be 0.99999999999999999999999999999999 because we reached that at t=0.9999999999999999999999999999999.
I can keep adding 9s arbitrarily. It doesn’t matter how small the difference between 0.9999999999999999999999999999999999999 and 1 is, I can find the step at which we got closer than that. The only conclusion from this is that there is no distance between 0.99… and 1. If the “residual distance” between 0.99… and 1 is “not defined”, how can you even say it exists in mathematics?
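That “hand me any difference and I’ll find the step” argument can even be checked mechanically. A minimal sketch in Python, using exact fractions so no floating-point rounding muddies the water (the function names are mine, not anyone’s standard terminology):

```python
from fractions import Fraction

def position_after(n):
    """Pen position after the n-th step: 0.99...9 with n nines."""
    return 1 - Fraction(1, 10**n)

def first_step_within(epsilon):
    """Smallest n at which the residual 1 - position_after(n) drops below epsilon."""
    n = 1
    while 1 - position_after(n) >= epsilon:
        n += 1
    return n

# Hand me any positive difference, however tiny, and there is a step
# at which the pen is already closer to 1 than that difference.
print(first_step_within(Fraction(1, 10**32)))  # -> 33
```

The residual after step n is exactly 10⁻ⁿ, so the loop always terminates: no positive gap survives every step.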
Good, there are infinitely many nines in [0, 1], we’re on the same page here.
What do you mean “either”, do you mean also? A number is in the interval if it is less than 1 by some non-zero ε. What is the difference between 0.999… and 1? Actual mathematicians say that the difference is 0, and that 0.999… is 1. You need to either come up with a definition for what that difference is, or abandon your infatuation with 0.999… as a process as unmathematical.
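For reference, the standard computation of that difference, sketched straight from the series definition of the decimal expansion:

```latex
0.999\ldots \;=\; \sum_{k=1}^{\infty} \frac{9}{10^{k}}
\;=\; \lim_{n\to\infty}\left(1 - \frac{1}{10^{n}}\right) \;=\; 1,
\qquad\text{so}\qquad 1 - 0.999\ldots = 0.
```

The difference is perfectly well defined; it just happens to be zero.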
If you insist on keeping 0.999… in [0, 1) it will have the absurd distinction of being the only number in the interval that doesn’t have a definable difference from 1.
netzweltler seems to have latched onto the fact that there is no largest value in the open interval (0, 1), which excludes 1 by definition but includes every element of {0.9, 0.99, 0.999, …}.
Well if 0.999… = 1 it doesn’t include 0.999…
We have looked up set theory, and we find your conclusions wanting still … have you looked up limits yet?
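For anyone who does look them up: the relevant definition says a sequence (aₙ) converges to L if for every ε > 0 there is an N with |aₙ − L| < ε for all n ≥ N. Applied here, with aₙ the truncation carrying n nines:

```latex
a_n = 1 - 10^{-n}, \qquad
|a_n - 1| = 10^{-n} < \varepsilon
\quad\text{for all } n > \log_{10}(1/\varepsilon),
```

so the limit of the sequence is 1, and 0.999… is defined to be that limit. No “last step” is required anywhere in the definition.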
I think we’re making progress here, now we need to clarify exactly what value this represents … Also, we need to find a value that exists between (0.999…) and 1, as to demonstrate that we’re complying with the law of math that says between any two real numbers there exists yet another real number …
=========
I’m going to go out on a limb here, so I’m not saying I’m right, but something to consider: I think we can agree with netzweltler that this particular proof of (0.999…) = 1 isn’t as robust as maybe it should be … the mistake is that netzweltler then treats it as a counter-proof, but fails to acknowledge that as a counter-proof it suffers the same flaw: it’s not as robust as it should be.
There are other proofs that (0.999…) = 1 … some of which are quite rigorous … your mission … netzweltler … if you choose to accept it … is to show that these other proofs also give you the same results.
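One such proof, sketched (its shift-the-decimal-point step is itself justified by the series definition, so it supplements rather than replaces the limit argument):

```latex
x = 0.999\ldots \;\Rightarrow\; 10x = 9.999\ldots
\;\Rightarrow\; 10x - x = 9.999\ldots - 0.999\ldots = 9
\;\Rightarrow\; 9x = 9 \;\Rightarrow\; x = 1.
```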
Seems to me that the problem isn’t that
1 is not an element of [0 , 1) (which is certainly true)
but that
0.999… is not an element of {0.9, 0.99, 0.999, …} (being greater than every single element in that set)
There’s nothing wrong with any of the formulations proposed by the pro side.
0.9999999~ doesn’t equal 1 for the same reason that 0.9999999~ doesn’t equal 0.999999~8
It’s not like time is elapsing as the additional digits get appended to the right, and hence that it is “getting closer to” 1 “as you go”. There is no “go”, and no “getting closer”. It is what it is.
I couldn’t follow your “argument”, but given that 0.999… most definitely equals 1, there must be some error in it.
Most notably that “0.999999~8” is not a thing.
I don’t want to run the risk of being accused of being unhelpful again, but I think we can define 0.999…8 in a coherent fashion (it’s the limit of the sequence 0.98, 0.998, 0.9998, …) – it is equal to 0.999… which in turn equals 1.
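For what it’s worth, that reading of 0.999…8 can be checked numerically too. A small sketch with exact fractions (the names are mine, purely for illustration):

```python
from fractions import Fraction

def term(n):
    """n-th term of 0.98, 0.998, 0.9998, ...: (n-1) nines followed by an 8."""
    return 1 - Fraction(2, 10**n)

# The gap to 1 is exactly 2/10**n, which eventually drops below any
# positive bound -- so the limit of this sequence is 1, the same number
# that 0.999... denotes.
for n in range(2, 6):
    print(n, float(term(n)), float(1 - term(n)))
```

The trailing 8 never “survives” the limit: it sits one decimal place further out at every step, and the limit is blind to any finite prefix’s final digit.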
This last should give AHunter3 some idea where they went wrong.
If you could please say what that reason is, if you don’t mind … are you saying that 16/32 doesn’t equal 1/2 for the same reasons that 7/16 doesn’t equal 8/16? … and your second paragraph is not fully complete … “getting closer to” 1 needs to be defined in relation to something else “getting closer to” something … “as you go” is too indistinct; as we go to the store, as we go to the moon, as we go where?