But do you admit the truth of the matter is that we just simply accept, by the definition of limits, that they are the same? And furthermore, that limits offer us no actual proof in this matter?
Offering proofs to people who ask this question is really just not understanding the issue, or trying to avoid the simple statement that they are the same because we say they are, and that works out nicely for us.
Maybe for some people it is intuitively obvious that they are the same number? I think for many of us this is not the case, and I think it’s a reasonable objection.
I frankly am appalled that we actually work this into the definition of repeating decimals, because I do not believe limits offer us some ultimate truth here, and I don’t see why we should force students to accept this either. We can say we treat them as equal for the vast majority of mathematics, but as to the actual question of whether they are equal, at the least it is unknown.
Oh, well why didn’t you just SAY that in the first place?
You really compared .999… to .999… out to the last digit and found they are equal? You could have saved yourself some work. We could have told you that right from the start. And here we are, trying to convince you that .999… = 1 instead. Sheesh.
True. Hey, when this is done, let’s talk about Relativity!
Well, actually, I learned about the 10x=9.999… computation in 7th grade math, YEARS before I ever heard of limits. So I knew early on that .999… = 1 – Yet, once I learned about limits and sequences and such, I knew it even better!
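(For anyone who hasn’t seen it, that 7th-grade computation runs roughly like this; just a sketch, writing x for the value of the repeating decimal.)

$$
\begin{aligned}
x &= 0.999\ldots\\
10x &= 9.999\ldots\\
10x - x &= 9.999\ldots - 0.999\ldots = 9\\
9x &= 9\\
x &= 1
\end{aligned}
$$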
Really, erik150x, there’s a certain order for assuming, defining, or proving things in math. Nobody defined limits this way, or any way, just to make .999… = 1. Rather, limits are defined early in Calculus, in ways that seem useful and consistent with our intuitive notions of how limits ought to work. Then we use limits extensively in Calculus to do all kinds of useful stuff. Really, we get GREAT mileage out of limits!
And we define infinite series, and the methods of summing them (the limit definition of the sum of an infinite series) similarly, because it works the way we think it should work as it is in accordance with various cases that we were able to figure out in more basic ways – and because we need sums of series and don’t otherwise have a way to do them anyway – and again, we use that definition for all sorts of stuff. Again, GREAT mileage out of that one too.
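(For reference, the limit definition of the sum of an infinite series mentioned above is, roughly, the following; this is the standard textbook statement, not a quote from anyone’s post.)

$$
\sum_{k=1}^{\infty} a_k \;=\; \lim_{n \to \infty} s_n, \qquad \text{where } s_n = a_1 + a_2 + \cdots + a_n \text{ is the } n\text{th partial sum.}
$$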
And then, as a particular useful usage, we define infinite decimal fractions as a sum of an infinite series (for lack of any other meaningful way to do it, or maybe it’s just one of various good ways we could have done it).
And we do ALL of the above WITHOUT particularly shooting for the goal of just making .999… = 1 (Which we sort of already knew anyway because we did that 10x = 9.999… thing in 7th grade, remember?)
And THEN we work out the value and meaning of .999… using our shiny new mathematical tools, and LO AND BEHOLD – it turns out to be… wait for it! … ONE! Well, surprise, surprise. We kind of thought we knew that all along! So we have some independent confirmation that there is sanity and consistency in at least one corner of the mathematical universe. And we (well, all the rest of us anyway) are comforted to see yet one more case where limits and infinite decimals and things work as we expected them to! {:Breathes sigh of relief:}
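(The working out, for anyone who wants to see it spelled out, is a one-liner; a sketch using the definition above.)

$$
0.999\ldots = \sum_{k=1}^{\infty} \frac{9}{10^k} = \lim_{n \to \infty}\left(1 - \frac{1}{10^n}\right) = 1.
$$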
But note again, erik150x, we didn’t just outright directly define that .999… = 1 , AND we didn’t go to all that trouble defining and developing limits and infinite series just for the purpose of making .999… = 1 – We did all that work well before that, for all that great mileage we could get out of it, and .999… = 1 just sort of came along with the package deal.
It’s all just logic, dude! You choose your premises as best you can, preferably not entirely arbitrarily, but because they make sense, and then follow the logic where it leads you.
Logic cannot tell you what is true! Logic can only tell you what else is true!
(And pardon me if I’m overlooking any additional posts that came in while I was busy typing all that!)
In other words, people who ask this question, “.999… = 1?”, are not asking whether you can show with limits how we define these to be one and the same number. Most of them, I can assure you, are not asking this. They want to know, from the purely intuitive knowledge that we all possess that .9 < 1 and .99 < 1 and .999 < 1 and .9999 < 1 and so on… how it is that at some point on further expansion (even infinite) they become equal?
Well, see? We CAN’T add up infinitely long decimal fractions in the “usual” way, for exactly this reason. That’s why we need to find, or define, some other way. That’s where we come up with the business about infinite series and using limits to find the sum of them.
Wait a minute. Which post, by whom, are you responding to by asking these questions now? I can’t see that you’re asking anything here that you haven’t already asked, and I can’t see that there are any answers to be given different from the answers already given.
Are we ready to declare that the conversation is just going in circles, coming to the same spot over and over, like a hiker lost in the forest? What new and different direction is to be taken from here?
Are we ready to declare this conversation a stalemate?
The only kind of proof in mathematics is a proof of something which is true by definition (either directly or indirectly; probably indirectly, if anyone would bother with a proof, but then, it’s all a matter of how you look at it). The only way something can be mathematically true is because it’s been made true by definition. This is as much the case for “2 + 2 is equal to 2 * 2” or “5 is not equal to 0” as it is for “0.999… is equal to 1” or “0.999… is not equal to 1”.
So limits can’t force you to accept that 0.999… = 1 if you don’t accept the definitions that lead there. But they can clarify the nature of the definitions which do lead there. And for some people, the question “Why does 0.999… = 1?” is well answered by treating this as “What definitions are you using that lead to this result?”. For other people, their goals may be slightly different.
One might well say that. It certainly is true that “they [0.999… and 1] are the same because we say they are and [that] works out nicely for us”.
But one could also say that what the proffered “proofs” do is show the reasons for having adopted the particular definitions we did; they justify the claim that these definitions “work out nicely for us”. In that way, they are performing a useful function.
There is no ultimate truth in mathematics. There is only truth relative to definitions. The status of the proposition “0.999… = 1” is no different from the status of “5 = 0” or “Kings can jump diagonally”. There’s not some ultimate truth status out there floating in the Platonic ether for us to discover. There’s just the fact that these are all true on some interpretations (e.g., the standard interpretation into real numbers, arithmetic modulo 5, and checkers, respectively), and not on some other interpretations (e.g., the ones I’ve outlined into hyperrationals, integer arithmetic, and chess).
And that’s all there is to it. There’s not some great mystery. There’s no mystery, any more than there’s some mystery as to whether the rules of checkers or the rules of chess are the “real” rules. There’s just a choice. We can choose what game to play at any moment. We can even play multiple language games and talk about their relationships to each other. That’s it. That’s everything. That’s what mathematics is like.
erik150x, somehow I get the feeling you feel that 0.999… apples are never equal to one whole apple. This of course is true. That’s why 0.999… isn’t a natural number.
However, if you want to discuss 0.999… you should know it is a part of the mathematical construct known as ‘real’ numbers (named so because they aren’t). In nature you will never count to 0.999… It is a construct to come up with an answer for things like 1 − 1/infinity and to accommodate numbers like pi and 0.333…
Now it starts to get tricky. If we want to use stuff like “infinity” we have to agree to some rules.
This is what makes math the useful tool it is. We can make models to describe reality, and mathematics provides the toolbox to work with those models. This is very useful; it allowed us to put a man on the moon, and we can make iPhones.
However, if you think maths describes reality or is a philosophical tool, you are in trouble.
In math, 0.999… equals 1.
Outside of math, 0.999… means your 9 key is stuck.
Forcing students to accept this stuff about limits turns out to be useful enough, even though we only fall within some reasonable epsilon of a 99.999999…% success rate at forcing students to accept this. One useful result is a fairly low rate, within some reasonable epsilon of 0.00000000…1%, of threads like this one.
You take some of my statements a little too literally. Which makes me think you are not as smart as I thought you were, “dude”.
Of course the whole purpose of limits was not to define .999… = 1. Please, dude, you insult me.
The purpose was to get rid of the issues that arise when we ask questions like: what is 1 − .999…?
The most logical response is 1/infinity, not 0.
But 1/infinity is a beast of a thing, so much so that we avoid it altogether. There are some who would prefer it didn’t exist at all. Some would like to say that it just doesn’t exist. In any case, limits were designed to avoid these issues. I suppose some have come to think of limits as some fundamental truth. I don’t, and I am sure many others don’t.
It is not, as you say, “just logic” at all. I would like to know on what basis you would consider .999… = 1 with the definition of .999… being:
9/10 + 9/100 + 9/1000 + …
that is, defined as the infinite series itself, not the limit of the series?
The limit theorem just says that because we can show they are as close as we want (but we can’t show they are equal), we will just say they are. I don’t see what’s logical about that at all. And again, I am using the .999… thing here as an example; it was not, I am sure, the pressing issue of the day that pushed limits into the forefront. Limits are great, yeah yeah yeah, but they say NOTHING about how exactly .999… actually EQUALS 1. If you don’t get that, then you will just have to keep thinking about it.
I mentioned in three (3) separate posts that 0.999…=1 was a direct consequence of the Axiom of Archimedes – a simple common-sense proposition (which Archimedes attributes to his predecessor Eudoxus) which requires no mention whatsoever of words like “definition”, “limit”, “infinity”, or “infinitesimal.”
Why no comment on that? Is it just because Archimedes’ viewpoint doesn’t fit with your preconceived answer?
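(For anyone who missed those earlier posts, here is a sketch of that argument as I understand it; the only facts used are that 0.999… is at least as large as each of its truncations 0.9, 0.99, 0.999, …, and the Archimedean property that no positive real number is smaller than 1/n for every whole number n.)

$$
d = 1 - 0.999\ldots \;\Rightarrow\; 0 \le d \le 1 - \underbrace{0.9\cdots9}_{n\ \text{nines}} = \frac{1}{10^n} \ \text{for every } n \;\Rightarrow\; d = 0 \;\Rightarrow\; 0.999\ldots = 1.
$$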
Every real number can be represented by an infinite decimal expansion.* For any two real numbers there is one in-between. If 1 and .999~ are real numbers, then either there is some real number that can be represented by an infinite decimal expansion which is between 1 and 0.999~; or 1 = .999~.
We can have any finite number of digits on the left hand side of the decimal point, and denumerably many digits on the right hand side.
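(A sketch of how that plays out, granting the footnoted facts about decimal expansions; suppose some real number x sat strictly between them.)

$$
0.999\ldots < x < 1 \;\Rightarrow\; 1 - \tfrac{1}{10^n} \le 0.999\ldots < x < 1 \ \text{for every } n \;\Rightarrow\; \text{the first } n \text{ decimal digits of } x \text{ are all } 9,
$$

and since that holds for every n, the expansion of x would be 0.999… itself, contradicting 0.999… < x. So there is no in-between number, and 1 = .999~.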
Limits DO NOT ACTUALLY allow you to do this infinite addition; they allow you to say that if, as you go towards infinity, it gets closer and closer to some number, you can just make that leap and say it is that number.
You could take the view, if you wish, that limits give you permission (without any justification other than that it would be nice if we could just say this, and wouldn’t it be nice if we could prove this, although we can’t) to say .999… = 1.
Or you can take the view that limits simply avoid the issue, trivial for the most part in everyday matters, of whether or not .999… = 1.
To me the second is more logical. I would like to understand why it is, Senegoid, that it’s logical to just accept limits as a fact rather than a tool?
I mean, I like having answers too, but there is some fun in mystery too.
What would be so wrong in saying we can use limits as a tool, but in reality all we can say is that 1 − .999… = 1/infinity = undefined? TBD.
I would argue this is an artificially imposed conclusion/definition, based on a real number system which has the notion of numbers as limits put onto them. It does not have to be that way.
After all there are no natural numbers between 1 and 2, but they are not equal.
Yes. The point I am trying to argue here is that Limits simply ask us to accept that .999… = 1 by the very definition of a Limit.
Think about the definition of a limit. It says that if we can show that a series approaches some other number to any desired precision we wish to calculate, then we shall say they are equal. It does not offer any proof or justification for allowing us to do this.
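(For reference, the formal statement being paraphrased here is something like this; the standard definition of the limit of a sequence s_1, s_2, s_3, ….)

$$
\lim_{n\to\infty} s_n = L \quad\text{means}\quad \text{for every } \varepsilon > 0 \text{ there is an } N \text{ such that } |s_n - L| < \varepsilon \text{ whenever } n \ge N.
$$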
It is very useful, no doubt, but it really kind of entirely avoids the question by presuming they are equal to begin with; at least when talking about .999… = 1, it is really circular logic to try and “prove” this using limits.
Any fool can see .999… and 1 are infinitely close to one another. The matter at hand is whether they are actually equal. Limit-based calculus presumes this by its very definition of limits. I guess if you are one who thinks they ought to be the same, you think limits are intuitive, like my friend Senegoid. If you’re like me, limits are a tool for avoiding the question and do not answer it.
I want to repeat this because I think it bears repeating:
**In other words, people who ask this question, “.999… = 1?”, are not asking whether you can show with limits how we define these to be one and the same number. Most of them, I can assure you, are not asking this. They want to know, from the purely intuitive knowledge that we all possess that .9 < 1 and .99 < 1 and .999 < 1 and .9999 < 1 and so on… how it is that at some point on further expansion (even infinite) they become equal?**
Trouble is this isn’t consistent with the definition of 0.9999…
It isn’t clear what the notation 0.9999…9 means. (It may well be that assuming that it is a valid notation alone assumes the result of the proof.)
0.9999… isn’t necessarily the same as 0.9999…9
What is the next number after the rightmost 9 in 0.999…9 ? There are really three options here.
[ol]
[li]It is 9. In which case the above proof fails.[/li]
[li]It is not 9. In which case the definition has a problem: term(infinity) = 9, term(infinity + 1) = x, where x != 9; but infinity + 1 = infinity, by definition, and thus 9 != 9, and the proof is internally inconsistent.[/li]
[li]There is no number to the right. Not a zero, or a nine, but no number at all.[/li][/ol]
Option number 3 requires that the definition of multiplication be extended to cope with an ability to multiply an integer by no-number-at-all. The proof assumes that such an operation is defined, and further assumes that the answer is zero. This isn’t consistent with the usual definition of multiplication.
If you did allow it, you would violate Peano’s axioms for the integers, so you must have defined your own number system that is not the same as the integers.
Alternatively, you can define multiplication of an infinite decimal to suit the proof, by making it aware of the infinity-th term. But that use of the definition of multiplication in the example assumes that you are allowed to pull off the proof, and so is circular. You need to define the operations on the infinite series before you try to perform the proof. Without doing so, your proof is internally inconsistent. You can redefine infinity, or redefine the multiplication operators, but you would need to change them from the usual ones. Which gets us full circle.
Also:
0.9999… = 0.99999…9999…
or indeed = 0.9999…9999…9999…9999…9999… ad infinitum. (since infinity * infinity = infinity)
Trying to say 0.9999… = 0.9999…9 and not = 0.9999…99 or 0.9999…9999…9999… has a whole host of problems. Indeed I would want to see a proof that 0.9999…9 did not equal 0.9999…9999… but did equal 0.9999… (or indeed that 0.999…999… != 0.9999…) before even bothering with the rest.
Since we can have an infinite number of the pattern “…9999” you can’t perform the meta trick either. Infinity^n is still infinity.