erik150x apparently joined this board just today to awaken a thread that was 12 years old and stonewall everyone who has posted factual information to support the original assertion. Why he feels that this is worth 40 posts in the same thread in the same day I have no idea. And clearly there is no factual argument that will sway him so I for one am saving my energy for arguments of opinion rather than of fact.
(Referring to the 10x=9.999… method of proving that .999… = 1)
Well, maybe mea a little bit culpa. I called it a “trick” several posts up. Actually, that was a bit of a rhetorical trick.
I was making the point that infinitely long decimal numbers obtain meaningful values by definition – that is, they are defined as the sum of an infinite series, and that sum has a meaning only because we’ve defined one for it. You don’t need any kind of “proof” for that.
To be sure, defining the sum of an infinite series to be the limit of the sequence of partial sums doesn’t tell you anything about what that actual value is. You still have to find a way to compute it. You could call your computation a “proof” if you want – that was the sense I was trying to convey in calling it a “trick.”
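For concreteness, here is that definition spelled out for 0.999… (a minimal sketch in standard notation, using nothing beyond what the post above already assumes):

$$
0.999\ldots \;:=\; \lim_{n\to\infty} s_n,
\qquad
s_n \;=\; \sum_{i=1}^{n} \frac{9}{10^{i}} \;=\; 1 - \frac{1}{10^{n}}.
$$

The partial sums s_1 = 0.9, s_2 = 0.99, s_3 = 0.999, … are each perfectly ordinary finite sums; the infinite decimal is, by definition, whatever number that sequence closes in on.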
No. It’s so infinitely close (literally) to 1 that math has defined it as equal to one for any practical or theoretical purpose whatsoever. Making this assumption allows many other advanced mathematical calculations. If we need to be hyperliteral and say that it is really less than one, we foreclose a bunch of other math for no good reason at all.
As I said earlier, should Lowes, instead of having a pile of 2x4s, have individual SKU numbers for each cut that is a few thousandths of an inch off, just to be technically accurate?
Missed the edit window:
IOW, see that “<” symbol you used? That’s a math symbol. Thus, the science of mathematics gets to define what that symbol means, and if it decides that < means something more than hyperliterally less, then that’s what it means.
There’s really no more of a trick to it than this. It’s close enough, so math says it’s equal.
This is incorrect. The value of 1/x *approaches* infinity as x *approaches* 0. However, division by 0 is undefined in mathematics, so 1/0 is a meaningless value.
Incorrect. It is exactly equal to 1. This is an artifact of how numbers are represented in the base 10 system. For example, the number 1/3 is expressed in base 10 as 0.333… but in base 3 it is 0.1. It is quite an exact value. If you multiply 0.333… x 3 in base 10 you will get 0.999…, which is 0.1 x 10.0 in base 3, which is exactly 1.0[sub]3[/sub]. Not just really, really close, but exactly the same.
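One way to see the “exactly, not just close” point with exact arithmetic is a small Python sketch using the standard fractions module (the helper name is just for illustration):

```python
from fractions import Fraction

one_third = Fraction(1, 3)
print(3 * one_third == 1)       # True: 3 x 1/3 is exactly 1, no rounding involved

# Cutting 0.333... off after n digits falls short of 1/3 by exactly 1/(3 * 10**n),
# so no finite string of 3s equals 1/3 -- only the full infinite expansion does.
def truncated(n):
    return sum(Fraction(3, 10**i) for i in range(1, n + 1))

for n in (1, 2, 5, 10):
    print(n, one_third - truncated(n))   # 1/30, 1/300, 1/300000, 1/30000000000
```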
No, it’s not because it’s infinitely close; it’s because it’s exactly the same.
The proof doesn’t just show that there is no discernible difference. There is no difference even in theory; they are the same.
It’s not arbitrarily defined as the same because it’s close enough. It is in fact the same, as a consequence of how decimal representation is defined.
Refer to the comments above.
Read my post again.
My original sum went from i = 1 to i = infinity (and I clearly stated summations were one of the few contexts for which infinity was well defined, and I can expound on this, if you’d like). The sum at the end went from i = 1 to i = infinity.
The indices never changed. They are the same indices from before. You are refuting a statement that wasn’t even made in the first place.
Also, to address another one of my mathematical pet peeves:
A number with an infinite decimal expansion is NOT the same as an ‘infinite’ number.
Pi has an infinite decimal expansion. Pi is not “infinite”. It has a value. It happens to be greater than 3 and less than 4. That’s hardly “infinite”. Just because a number has a value that cannot be expressed finitely using a decimal expansion does not mean it does not have a definite and finite value.
I see, and I believe that you have convinced me. 1/3 X 3=1. No question. Just because the only way we can express 1/3 is base 10 numerals is .333333… does not mean that it is slightly (even infinitely slightly) less than 1/3. It is in fact 1/3.
Is that the long and the short of it?
Yes. That’s it exactly.
The rest of this started out as a double post, but I guess I took a bit too long to write it out:
This is mostly but not precisely correct.
- 1/x has no well-defined limit as x approaches 0. If we define a “right-sided” limit, this statement is true. But as x approaches 0 from the left-hand (negative) side, the value of 1/x does not become unbounded in the same way as it does from the right-hand (positive) side. You get +inf vs -inf, in other words.

- We don’t even need to use the word “infinity” for this limit. It’s an incredibly useful shortcut, but it’s not strictly necessary. We can simply say the limit becomes unbounded.
In a more formal way, say we have a sequence ‘s’, with terms s_i, where i is a positive integer. If for every M > 0 there exists an integer I such that s_i > M for all i > I, then we say the sequence is unbounded and grows without limit.
To go through that definition thoroughly is the subject of at least a half hour lecture, and I’d want to refine it a bit if I ever presented it to a class, but it gets at least 90% of the way there.
If we want to use the word “infinity” as in the extended real number system with +inf and -inf, we say that the limit of s_i above is +infinity, if this is the case. Likewise, we can make a similar limit definition for -inf.
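In symbols, that extended-real convention reads (just a compact restatement of the definition above, not anything new):

$$
\lim_{i\to\infty} s_i = +\infty
\quad\Longleftrightarrow\quad
\forall M > 0 \;\; \exists I \in \mathbb{N} \;\text{ such that }\; s_i > M \;\text{ for all } i > I,
$$

and the mirror-image condition with s_i < -M gives the definition for a limit of -infinity.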
There are so many connotations associated with the word “infinity” itself that we run into problems when our “common sense” notion of infinity runs up against what is actually defined.
We don’t really need to use the word “infinity” itself. Make up a different word. Say “bignum”. We define “bignum” such that “bignum” > r, if r is a standard real number. And “-bignum” < r if r is a standard real number. Of course, “bignum” is my stand-in for “infinity” but with less of the normal baggage.
I’m very curious about this myself. We see this behavior regularly here, especially on topics about physics and math. There’s no disgrace in admitting that you don’t understand relativity or QM or, in this case, infinities. They’re extremely tricky, they’re totally counter-intuitive or contrary to “common sense,” and it took the best people in the professions many years to work out the details. If you’re not in the field and haven’t given it a ton of study you’ll never come up with the right answer by just thinking about it. You have to do the heavy math. Yet we seldom get people coming in and saying that they don’t understand a point and could someone please give them an explanation they can try to understand. Much more often, we get posters like erik, who insists that every mathematician in the entire world is wrong. And, like erik, they lack even the most basic understanding of what they’re saying and why their arguments are immediately dismissible.
So why take the attitude that not only are all the true professionals wrong, but everybody in the thread explaining the correct answer in detail is also wrong? That just alienates everybody volunteering to help, and it drives experts like Indistinguishable out of the thread. Is this simply a mode of learning? Are teachers familiar with this? I’d think it would be incredibly frustrating if many students took the attitude that all the textbooks are wrong until the teacher can somehow prove them right. I don’t see how anyone could successfully teach at all under such conditions. However, we see it so often that it can’t be simply an individual aberration. It baffles me.
And technically it is not a formal proof either. It’s more of a demonstration of a proof, but it has the same status as saying that if 9x = 9, then x = 1. Which is hardly a trick. Arguing, as erik does, that you can’t know the answer given by multiplying 10 x .99999~ ignores that the rules for doing arithmetic with infinite decimals have been in place for 150 years or so, since Cantor and his contemporaries worked them out in exquisite detail. If mathematicians don’t know how to work a simple problem like that, then why would he take their word for any other bit of math whatsoever? Again, it’s a baffling argument.
[quote=“Great_Antibob, post:95, topic:27517”]
We’re not just “accepting” that there is a 9 there.
Here’s an example of one of the “contexts” where infinity is actually defined.
Write 0.9999… in a different form:
sum(i = 1 to “infinity”) [9/10^i]
Note that the “infinity” in the index of the summation just tells us not to stop adding more terms ever.
Now, multiply this by 10:
10*sum(i = 1 to “infinity”) [9/10^i]
We can bring the 10 “inside” the sum:
sum(i = 1 to "infinity") [10*9/10^i]
Now simplify:
sum(i = 1 to "infinity") [9/10^(i-1)]
Written in a more ‘standard’ form, this is 9.99999…
There is no ‘0’ at the end at all. Nor are we “adding” any digits at all. We’re just multiplying by 10.
[/quote]
You start out with… [9/10^i]
end with [9/10^(i-1)]
No, you never changed (i = 1 to infinity), but it is implied by the above changes, and though I can’t prove what’s wrong here, I can’t help but feel a subtle trick is at play which removes the [9/10^(infinity)] originally present.
I do realize (infinity - 1 = infinity), so your argument is still valid, and I do admit you have me at a loss to explain what’s wrong. It is a wholly unsatisfying argument to me, but I guess that’s my problem.
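For what it’s worth, the index shift in the quoted derivation can be checked before any limit is taken, at the level of finite partial sums. A minimal Python sketch using exact fractions (the helper name partial_sum is just for illustration):

```python
from fractions import Fraction

def partial_sum(n):
    """s_n = 0.99...9 with n nines, i.e. the sum of 9/10**i for i = 1..n."""
    return sum(Fraction(9, 10**i) for i in range(1, n + 1))

for n in range(1, 8):
    s_n = partial_sum(n)
    # The index shift at finite n: 10 * s_n is exactly 9 plus a partial sum
    # with one fewer nine. Nothing gets dropped off the end.
    assert 10 * s_n == 9 + partial_sum(n - 1)
    # The gap below 1 is exactly 10**-n, which shrinks toward 0 as n grows.
    print(n, float(s_n), float(1 - s_n))
```

So for every finite n, 10·s_n = 9 + s_(n-1), and the would-be “missing term” is just the gap 10^-n, which is what goes to zero in the limit.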
To the many people who have taken my debate here seriously: I thank you.
To the many people on here who think I have made some important, ingenious insight that no one else ever thought of: you misunderstand me. This question comes up over and over because it bothers many, many people, including some of the great mathematicians in history. I am certainly not and never will be close to that. However, I understand a good deal more than you give me credit for. The people who have taken the time to address my concerns about .999… = 1 in a serious manner understand my plight, I think, as they have probably struggled with it personally at one point or another.
I have not to this point really seen something that makes me say, oh yes, of course .999… = 1. Though Great Antibob’s demonstration of 10 x .999… being equal to 9.999… gives me pause for thought. I don’t really like the fact that we have to bring (infinity - 1) into it, but I can’t say it’s wrong either.
Many mathematicians have used and still do use infinitesimals (albeit not in the real number system). The idea of 1/infinity is not a ridiculous notion or certain madness, as some would say here. Many proofs make use of a limit, which is in the end an unproven assertion. It may provide nearly infinite accuracy, but when you’re talking about the difference between 1 and .999…, “nearly infinitely accurate” hardly seems enough. Mathematicians (great minds over a great length of time) have defined the real number system into a finely tuned but not infallible system. If you understand Gödel’s incompleteness theorem, then you know no system is infallible. I did not come on here to try to change the mathematical world, simply to find someone who could convince me that .999… = 1. Or perhaps to find that we say it is so by convention, but ultimately there is no proof. Perhaps, as some have suggested, I need to find a number system that suits my taste better.
Thanks once again to all who took the time to ‘seriously’ discuss this.
I haven’t read every post in this thread, but most of them.
I want to say that erik isn’t necessarily wrong. He doesn’t strike me as your average high schooler who just can’t swallow what his math teacher told him about .999… equalling 1. He’s apparently thought a lot about this, and has gotten to the heart of mathematics.
He’s right that there is no proof. He’s right that it all boils down to limits. Here’s the deal: the part of mathematics we’re talking about here doesn’t have anything to do with proofs. It’s about definitions. The definitions are inspired by our intuition about numbers and measurement. Calculus was simultaneously the shining pinnacle of math and its biggest black eye for nearly two centuries, until a rigorous definition of limits was settled on in the mid-19th century. The definition of limits was inspired by our “wishy-washy” notions about calculus throughout that time period. Afterwards, calculus was made rigorous and became analysis. Make no mistake, 0.999… DOES equal 1 under conventional real analysis. And under the same system, neither “infinity” nor “1/infinity” is a number.
These definitions make proofs possible. You can’t prove “this equals that” without rigorously defining “this” and “that” (and “equals” for that matter). As with any bit of logic, you’re welcome to dispute the underlying definitions and dismiss the conclusions accordingly.
Here’s where you’re wrong, erik: You demand a proof of something there is no proof of. You’re demanding a proof of limits where there is only a definition. Accept it or not. IF you accept the definition of limits (and the definition of decimal numbers based on limits), then 0.9999… = 1. It can be proven and has been in this very thread.
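To make that concrete, here is the whole argument in one line, assuming only the standard definition of an infinite decimal as a limit (nothing beyond the geometric series already discussed):

$$
0.999\ldots = \lim_{n\to\infty}\sum_{i=1}^{n}\frac{9}{10^{i}} = \lim_{n\to\infty}\left(1-\frac{1}{10^{n}}\right) = 1,
$$

since for any ε > 0 the distance |1 − (1 − 10^-n)| = 10^-n drops below ε for every n larger than log base 10 of 1/ε. Accept the definition and the conclusion is forced; reject the definition and there is nothing left to prove.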
If you don’t accept it, that’s your call. But do us a favor, okay? DEFINE 1/infinity. Work out the consequences of that definition along with the other mathematical definitions that you DO accept. Are the consequences consistent? What exactly DOES 0.999… mean in your system of math? And ultimately, are the consequences of your definition interesting? Are they useful? If so, congrats, you’re doing real math, the kind mathematicians do, not engineers or scientists or businessmen.
But right now, it seems like you’re saying “I don’t accept your definitions, so I’ll argue over it.” That doesn’t work. People who don’t agree on the same premises can’t argue logically with each other, only emotionally. So what you need to do is put your money where your mouth is and give us the premises you DO accept. Honestly, if you don’t accept the definition of limits and the decimal notation of numbers in this case, there’s nowhere else to go. So if you want this thread to go anywhere, put up your definitions so we can argue with them. Otherwise, this thread is just a one-sided “nanny nanny boo boo, I don’t believe you” type of thing.
Just please, please stop demanding proofs of definitions. It doesn’t make sense.
Thanks, Dr. Cube, for a very satisfying response.
My bad was indeed not comprehending that .999… = 1 is built into the works and thus a given instead of something proven.
I will definitely get back to you all when I finish my incompleteness theorem on limits and the decimal notation system. It might be a while, though, so be patient.
I also hope everyone heeds your advice similarly and stops trying to provide proofs for definitions, which only serve to confuse poor lost math souls like myself.
I should mention that you have your work cut out for you. A system of math WAS developed that defines and uses “infinity” and “1/infinity”, called hyperreals, or “non-standard analysis”.
In that system 0.999… STILL equals 1, so you’re going to have to come up with something new. Basically, you’re going to have to redefine what decimal numbers mean and hope what you come up with is consistent.
To be fair, the definitions are about limits and decimal numbers. IF you accept the definition of the decimal representation of real numbers, which is based on limits, THEN you can prove that 0.999… = 1. Which is what everybody in this thread was trying to do.
What you were saying is “prove it without limits” which is basically equivalent to “redefine decimal numbers”. If you want to come up with a new definition of decimals, that’s your job, not ours.
Duly noted. I was not aware that currently the very definition of decimals depends on the definition of a limit.
What I am wondering at the time of writing this is: was that a choice of necessity or convenience? If a necessity, why? For example, what I would like to see (and I am NOT doubting it’s true) is an example of how not defining repeating decimals in this way leads to failure or contradiction. Honestly, I am not trying to argue the point; I would just like to see a demonstration of why it is better for all if we define .999… or any other .xxx… as the limit of a geometric series. I will certainly look into it myself, but if there is something quick that makes this point obvious here, that would be nice to see.
[quote=“erik150x, post:60, topic:27517”]
To put this more succinctly…
Saying that the geometric series .999… converges to 1 is to say that there is a limit for that sum, which is 1. Saying that there is a limit for that sum is saying that it can be proven that .999… is as close to 1 as I wish to prove. Yet you cannot prove it IS one. Right?
Well, heck, I could have told you .999… is a close to 1 as you can get without any calculus. Well that’s not counting the new number I came up with, described as:
(1 - .999…)/2 + 1 - .999… …
which is halfway between .999… and 1.[ital. Leo]
[/quote]
This rings a bell in my layman’s head about the matter of point fields in modern physics. If this isn’t a big hijack, could someone give a quick comment?
There’s no physics involved. “Point” just means “number” here. This is because we’re fundamentally dealing with the geometry of the real number line, and the arithmetic properties of the numbers are secondary. In working in geometric situations, the word “point” is much more commonly used.
Physics is a bit different from pure mathematics: in physics there is a quantifiable “minimum interval” for most things, usually called the “Planck <x>”. So the Planck length is the minimum possible distance between two points, a limitation the physical universe has but math doesn’t share.
Okay, okay, that’s theoretical. But the idea that spacetime is ultimately discrete at the smallest level is at least one popular interpretation of the notion of the Planck length (another is that smaller scales of space exist, but it’s basically impossible to probe them, even with perfect instruments). There are also a bunch of caveats I don’t understand, such as “if large extra dimensions exist[…] the planck length has no fundamental physical significance” (Wikipedia), but either way, there may or may not be a fundamental difference between a literal physical point and the continuity of the real numbers, depending on whether the universe is discretized at the level of the Planck length.
So, we’ve all been trying to beat into your skull the importance of NOT relying on intuition, just as our math teachers beat it into our skulls.
Here’s a more practical approach: Intuition can sometimes work well for giving us some prospective starting points for some line of mathematical development. For example, in the days of Newton and Leibniz, they developed intuitive notions of limit, continuity, and differentiability, and then went on to develop the entire Calculus on top of that. And throughout the whole body of Calculus, they developed formulas that seemed to work. That is, where the same things could be computed by older simpler formulas, the new-fangled formulas always gave the same answers. Then, they used the same techniques to develop formulas for things that couldn’t have been computed before, like areas under strangely shaped curves. But then how would you ever know if the formulas were giving you the right answers? Well, you could chop up the area into little squares to get an approximate answer, and note that the formulas always came real close to that. But dammit, mathematicians wanted to PROVE that their formulas worked, and for centuries they COULDN’T!
They needed precise definitions just to know what they were working with. Without that, there could be no tools for proving things.
The general pattern went like this:
IF you have [certain conditions], THEN you know (or can prove) that you have [certain other conditions] along with it.
So you had to know exactly what certain conditions you have to begin with.
So, they had an intuitive idea of limits and continuity. But they didn’t know exactly how to describe it. That is, they didn’t know exactly what condition they had, in a way that they could use to develop proofs of anything. What exactly did they need to prove? Like Justice Potter Stewart’s observation that he couldn’t define pornography, but he knew it when he saw it. (And look how debatable THAT has always been!)
Finally, someone came up with a precise definition of a limit that seemed to cover everything that everybody always intuitively “knew”. And surprise, surprise: It was arcane! It was the epsilon-delta definition. It took a while to wrap one’s mind around – but it clearly expresses all the conditions that everyone seemed to mean (or wanted to mean) when they talked about limits. From this, a precise definition of continuity was built. And a precise definition of differentiability. By stating precisely what conditions you have when you have a limit, you then have some facts that you can build proofs with.
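For reference, that epsilon-delta definition in symbols (the function version, which is the one usually meant; the version for sequences is analogous):

$$
\lim_{x \to a} f(x) = L
\quad\Longleftrightarrow\quad
\forall \varepsilon > 0 \;\; \exists \delta > 0 \;\text{ such that }\; 0 < |x - a| < \delta \;\Rightarrow\; |f(x) - L| < \varepsilon.
$$

Continuity at the point a is then just the special case L = f(a), which is exactly the “define it at a single point first, then over an interval” order mentioned below.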
Note that once you start doing that, it can cease to be intuitive. The e-d definition wasn’t intuitive, and took a few centuries to come up with. The definition of continuity was likewise counter-intuitive: Whoda thunk that you would first define continuity at a single point, and then over an interval?
Okay, that’s why you need definitions. Not just any old definitions (as you may have been taught). Definitions that you can actually do useful work with. And that’s also why you can’t just give glib definitions to things like 1/0 or 1/infinity. Hey, anything divided by itself = 1, right? So let’s just define 0/0 = 1 so then it will act like any other n/n, and a whole lot of problems go away!
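(To spell out why that glib definition fails: if you set 0/0 = 1 and keep the ordinary rules of arithmetic, then

$$
1 = \frac{0}{0} = \frac{2 \cdot 0}{0} = 2 \cdot \frac{0}{0} = 2 \cdot 1 = 2,
$$

so you have to give up either the new definition or the old rules. That is the sense in which a definition has to be one you can actually do useful work with.)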
I’ll do a separate (maybe shorter) post on how these ideas apply to infinitely long decimal fractions.
ETA: Oh, and by the way: So how well DID those integration formulas for curvy areas work out after all? Well, it was hard to say. Turned out, nobody really had a definition for “area”, so the integration formulas didn’t really have any “right” or “wrong”. But they seemed to give intuitively correct (or at least close) answers. So, mathematicians did the mathematical thing: They DEFINED the area to be whatever answer those formulas gave! Suddenly, as if by magic, all those integration formulas were exactly, unfalsifiably, precisely right!
Okay, so now: What do you gain by defining decimal fractions as the limit of a sum of terms, that you couldn’t have done before?
Well, as we’ve discussed already, an infinite decimal (and let’s just be clear: by infinite decimal, we mean one with infinitely many digits, not a fraction of infinite value. Okay?) doesn’t have any meaning that you can get at in the “usual” way. A finitely-long fraction is defined as the sum of a specific sequence of terms. We discussed that already.
It seemed to make good sense, intuitively and even empirically, to think of an infinite decimal as the sum of infinitely many terms. But (again, as mentioned above), it’s not so easy.
And again, as mentioned, adding up an infinite series is NOT like ordinary addition. It doesn’t work. It needs to be defined, in some way that gives a satisfying result. Here’s your first clue that there’s a problem: Addition of infinitely many terms is not necessarily commutative! :eek:
It’s true! The most obvious examples (and the only ones I can remember all these years later) come up with alternating series – where the terms are alternately positive and negative. Suppose you try to add up all the positive terms into one sum, and all the negative terms into a separate sum, and then add those. It might not work! Or suppose you just wrote the series with all its positive terms first, followed by all its negative terms. Wait a minute! If you wrote all the positive terms first, there are infinitely many of those, and you’d never even get to the negative terms! Oops.
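A concrete instance of that, as a small Python sketch (the function names are just illustrative; the series is the alternating harmonic series, whose rearrangements genuinely change the sum):

```python
import math

# Partial sums of the alternating harmonic series 1 - 1/2 + 1/3 - 1/4 + ...
# in the usual order: these approach ln(2), about 0.6931.
def usual_order(n_terms):
    return sum((-1) ** (k + 1) / k for k in range(1, n_terms + 1))

# The same terms rearranged "two positives, then one negative":
# 1 + 1/3 - 1/2 + 1/5 + 1/7 - 1/4 + ...  These approach (3/2)*ln(2) instead.
def rearranged(n_blocks):
    total = 0.0
    pos, neg = 1, 2          # next odd denominator, next even denominator
    for _ in range(n_blocks):
        total += 1.0 / pos + 1.0 / (pos + 2) - 1.0 / neg
        pos += 4
        neg += 2
    return total

print(usual_order(100000), math.log(2))          # ~0.6931 for both
print(rearranged(100000), 1.5 * math.log(2))     # ~1.0397 for both
```

Same terms, different order, different limit (ln 2 versus 3/2 · ln 2), which is exactly why the order of the terms has to be part of the definition.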
Clearly, we need to have some definition of what the sum of an infinite series is! A definition that gives us the “right” answer (in cases where we can independently determine the right answer), and that seems to agree with our intuition of what it means to add up a bunch of numbers, and so on. (But it turned out, we had to give up on keeping it commutative.)
That’s where the limit of a sequence of partial sums came in. We don’t know how to add infinitely many terms (we don’t even know what that means, yet), but we can add up finitely many terms. So create that sequence of partial sums (as discussed already) and see where it goes. If the sequence approaches a limit, then define the sum to be that limit. That’s how it’s done. ETA: And, this process makes fairly clear, I think, that if you re-arrange the terms, thus changing the sequence of partial sums, all bets are off about what the limit might be, if any. The order of the terms does matter!
This gives a definition that we can actually work with. That’s what you’re missing if you don’t have that definition. If you have an infinite series that converges, then you have a limit. And we know by now what a limit is and how to work with it. Bingo! That gives us the meaning and the tools to work with series that we didn’t have before. We have tools for computing (or “proving”) the value of such a series. And when we apply those tools, one of the results is … wait for it! … .99999… = 1
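And applying those tools really is mechanical. A last small Python sketch (names are just illustrative): given any tolerance epsilon, it finds a cutoff N past which every partial sum of .99999… sits within epsilon of 1, which is precisely what “the limit is 1” asserts.

```python
from fractions import Fraction

def partial_sum(n):
    """0.99...9 with n nines: the sum of 9/10**i for i = 1..n, kept exact."""
    return sum(Fraction(9, 10**i) for i in range(1, n + 1))

def cutoff_for(epsilon):
    """Smallest N with |1 - partial_sum(n)| < epsilon for every n >= N.
    This works because 1 - partial_sum(n) is exactly 10**-n, which only shrinks."""
    N = 1
    while 1 - partial_sum(N) >= epsilon:
        N += 1
    return N

for exponent in (3, 6, 9):
    eps = Fraction(1, 10**exponent)
    N = cutoff_for(eps)
    print(f"eps = 10^-{exponent}:  N = {N},  1 - s_N = {1 - partial_sum(N)}")
```

No matter how small an epsilon you hand it, some finite N does the job; under the definition, that is all “.99999… = 1” ever claimed.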