Apparently I can’t type and think at the same time.
I entered “3”, then hit the “1/x” key, and got 0.333333333333333333, some really long string of 3s.
Then I hit “1/x” again, and got 3.
Then I reentered 3 and 1/x, then multiplied that by 3 and got 1.
Then I manually entered some incredibly long string (40 or 50 digits) of 0.333… - longer than the display field. When I hit the 1/x key, it gave me 3.0000…3 – the last digit was a 3.
Then I manually entered 0.333…3 again, multiplied by 3 and got 0.999…9.
The point is still the same - the calculator knows the difference between a computed 1/3 and a manually entered 1/3.
There are some hidden assumptions in what you are describing.
[ol]
[li] You are assuming that you can enter an arbitrary-precision number. This is almost certainly not the case. The system will silently drop the residual digits at some point. You could experiment and find the length of digit string that the system actually accepts - try entering 0.333333339, 0.3333333339, 0.33333333339, etc., and see when it ceases to give you different answers for the reciprocal.[/li]
[li] You are assuming that the precision of the printed answer is the same as the internal precision of the calculation. If the internal precision is greater, and the output mechanism correctly rounds the answer to the width of the display format, you will mostly see the observed behaviour. Again, you could experiment to find the reality here. [/li]
[li] You are assuming a particular internal representation. Since it is a computer program on an x86 processor, we would probably assume 64- or 80-bit floating point, maybe 128-bit. However, there is no reason why a more advanced system could not use a symbolic representation. So the calculation 1/3 is carried as something of the form divide(1,3), where 1 and 3 are integers. The reciprocal operation is defined as divide(1, x), and the system is able to perform appropriate manipulations so that divide(1,divide(1,x)) = divide(x,1) = x. Mathematica, for instance, will do this. (There is a short sketch of both of these possibilities just after this list.)[/li][/ol]
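A minimal Python sketch of points 2 and 3 (purely illustrative; it says nothing about how this particular calculator is actually implemented): a 64-bit binary float carries roughly 16-17 significant digits internally, so a shorter display is just a rounded view of the stored value, while a rational type keeps 1/3 exact.
[code]
from fractions import Fraction

# Point 2: a short display is a rounded view of what the machine actually stores.
x = 1 / 3                  # computed as a 64-bit binary float
print(f"{x:.10f}")         # 0.3333333333 -- what a 10-digit display would show
print(repr(x))             # 0.3333333333333333 -- closer to the stored value
print(1 / 0.3333333333)    # roughly 3.0000000003 -- re-entering only the displayed
                           # digits and taking the reciprocal exposes the truncation

# Point 3: a symbolic/rational representation keeps the arithmetic exact.
y = Fraction(3)
print(1 / (1 / y))         # 3 -- divide(1, divide(1, 3)) comes back as exactly 3
[/code]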
The other point is that you didn’t manually enter 1/3, you entered 0.333333333333333333333333…333333330
It is possible that his calculator uses rational numbers internally, rather than either decimal or binary, only converting to decimal for display. Rational numbers are not normally used in computers, because of the speed penalty, and because they are useless as soon as irrational and transcendental functions enter, but some languages and libraries support them, and simple calculators can afford the speed loss.
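For what it’s worth, some languages do expose exact rationals. Here is a small sketch using Python’s fractions module (an illustration of the general idea, not of any particular calculator), showing both the benefit and the limitation mentioned above:
[code]
from fractions import Fraction
import math

third = Fraction(1, 3)
print(third * 3)                 # 1 -- exact; no 0.999... artifact
print(third + Fraction(2, 3))    # 1 -- exact rational addition

# But rationals stop helping once irrational/transcendental functions enter:
print(math.sqrt(Fraction(2)))    # 1.4142135623730951 -- silently a float again
[/code]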
Thanks for elaborating on my point. My point was that the calculator only has a certain buffer for what you can input, and so it treats a 1/3 entered to the precision the input field will allow as a different value from a 1/3 computed from the integers 1 and 3 using the divide function. This means the internal memory for a computed 1/3 is different from the memory of the entered 0.333…3. Yes, I’m aware they are not actually the same value; one is an approximation of the other. I was attempting to demonstrate that the calculator knows this.
My reason for entering more threes than the display field holds was an attempt to somehow trigger the buffer to accept more, but on a second pass with the sound on I realized I was getting an error sound, so it isn’t doing anything with the additional threes. It just accepts the amount that displays.
My reason for using the calculator in the first place? I was trying to show that
0.33333333333333333333333333333333333333333
is not the same as
0.33333333333333333333333333333333333333333…
and that even calculators tend to know this.
I was also demonstrating that if you multiply
0.33333333333333333333333333333333333333333
by 3 you get
0.99999999999999999999999999999999999999999
which validates the claim made earlier that I can multiply 0.33~ by 3 and get 0.99~. At no point does multiplying a digit 3 by 3 give anything other than 9, so you can start in the tenths place and work to the right, instead of the normal process for arithmetic, which requires starting from the last place and working back up.
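Here is that left-to-right multiplication as a quick Python sketch (illustrative only): since every column is just 3 x 3 = 9 and nothing carries, the order in which you process the digits of a finite string doesn’t matter.
[code]
digits = "3" * 41                                   # 0.333...3 to 41 places
product = "".join(str(int(d) * 3) for d in digits)  # multiply each column, left to right
print("0." + product)                               # 0.999...9 -- 41 nines
print(product == "9" * 41)                          # True: no column ever carries
[/code]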
For most of the people now participating on this thread, the points are silly because they are already understood.
Correct. The OP is assuming that you can use the rules of elementary school math to add two infinite series. As an example, here are the steps to add 334 + 666:
 334
+666
----
You start at the right and add 4+6 and get 10. Place the zero and carry the 1.
Add 1+3+6 and get 10; place the zero and carry the 1 again. Do the same for the hundreds column, and the final carry gives you 1000.
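For concreteness, here is that right-to-left procedure as a short Python sketch (add_by_hand is just an illustrative name, not anything standard):
[code]
def add_by_hand(a: str, b: str) -> str:
    """Schoolbook addition: start at the rightmost column and carry leftward."""
    width = max(len(a), len(b))
    a, b = a.zfill(width), b.zfill(width)
    carry, out = 0, []
    for x, y in zip(reversed(a), reversed(b)):      # right to left
        carry, digit = divmod(int(x) + int(y) + carry, 10)
        out.append(str(digit))
    if carry:
        out.append(str(carry))
    return "".join(reversed(out))

print(add_by_hand("334", "666"))                    # 1000
[/code]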
This isn’t possible for an infinite decimal, as you can never start at the right. You can’t add .333~ + .666~ by hand. BUT if you could, you’d end up carrying a 1 infinitely across to the left.
This is why we do math in fractions. If you try to add 1/3 + 2/3 by first converting to decimal numbers, the best you can get is an approximation of 1, aka .999~
While .999~ is a great approximation of 1, it doesn’t actually equal 1. There is an infinite set of rational numbers between 0 and 1, and this is just one of them. Just because 1 minus .999~ cannot be expressed doesn’t make the difference 0. Just as a number can be infinitely large, it can also be infinitely small. Infinities are by definition impossible to express.
You are correct to be cautious about adding infinite series. However under certain circumstances they can be added. Decimal fractions, treated as infinite series, are one case where they can be added without problems. You can add any finite number of decimal fractions together, and get a valid result. So adding
.333… + .333… + .333… = .999…
is valid, and you don’t even have to do any carry operations.
(If you do need to do carry operations, it becomes a bit more complex, but the arithmetic is still valid).
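A small check of this with exact fractions in Python (purely illustrative): add the first n digits of .333… to themselves three times. Every column holds 3 + 3 + 3 = 9, so the result is exactly the first n digits of .999…, and the gap between that and 1 is 1/10[sup]n[/sup], which shrinks as n grows.
[code]
from fractions import Fraction

for n in (1, 2, 5, 20):
    partial_third = Fraction(int("3" * n), 10**n)   # 0.33...3, the first n digits
    partial_nine  = Fraction(int("9" * n), 10**n)   # 0.99...9, the first n digits
    print(n,
          partial_third * 3 == partial_nine,        # True: every column adds with no carry
          1 - partial_nine)                         # the gap to 1 is exactly 1/10**n
[/code]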
No, .999… is exactly equal to 1, in the same way as .333… is exactly equal to 1/3. If we were to accept your argument that there’s an “infinitely small” difference between .999… and 1, you would also find that there’s an “infinitely small” difference between .333… and 1/3.
Re-read post #48, which explains in elementary school terms how, by definition, mathematicians interpret infinite decimal notation. By definition, “.999~” means “The number which is greater than or equal to each of 0, 0.9, 0.99, 0.999, etc., and less than or equal to each of 1, 1.0, 1.00, 1.000, etc.”. This number is 1. By definition, “.999~” means 1.
“1 minus .999~” is clearly expressible, by such a phrase as “1 minus .999~”. It does happen to be zero, but even if it weren’t, it would still be expressible.
This is the flaw in the logic of this topic: simply saying 2+2=5 doesn’t make it true.
Walk through the mechanics of this and demonstrate it. Adding from left to right will only give you the approximation .999~, not 1.
Also, I stated above that .999~ is a rational number, but this isn’t true either. A rational number is any number that can be expressed as the quotient a/b of two integers. This isn’t true for the infinite series .999~, though it is true for .333~ and .666~. It is impossible to add two rational numbers and produce a non-rational number.
Sorry for rehashing this, I saw this in the archives and couldn’t see that .333~ + .666~ = .999~ was a self evident statement.
The decimal .333~ stands for the series 3/10 + 3/10[sup]2[/sup] + 3/10[sup]3[/sup] + … This is an absolutely convergent series, with the limit 1/3. This means that the sequence of partial sums S[sub]n[/sub] converges.
Now, if sequence S[sub]n[/sub] converges on a limit S and sequence T[sub]n[/sub] converges on a limit T, then the sequence (S[sub]n[/sub] + T[sub]n[/sub]) converges on the limit S + T. That is, in this case, the sum .333~ + .666~ stands for the series 9/10 + 9/10[sup]2[/sup] + 9/10[sup]3[/sup] + …, and its limit, as n goes to infinity, is a rational number: it’s the fraction 1/1.
Proof: The nth partial sum differs from 1/1 by the fraction 1/10[sup]n[/sup]. That fraction can be made as small as you like by making n as large as necessary, so the difference converges on 0 and the partial sums converge on 1/1.
Post #48 gives the definition by which mathematicians interpret infinite decimal sequences. And it is quite simple, too. You can round a decimal sequence up or down at each position; an infinite decimal sequence, by definition, means, in standard parlance, the number which is >= each of the rounding downs and <= each of the rounding ups.
When mathematicians say “0.999…”, they mean “The number which is >= each of 0, 0.9, 0.99, 0.999, etc., and <= each of 1, 1.0, 1.00, 1.000, etc.”. That is what the notation definitionally means to them. Of course, 1 satisfies this description. Accordingly, “0.999…” means 1, in standard mathematical notation, by the definition of this standard notation. Nothing more need be said; I will keep repeating this as necessary.
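Spelled out as a worked example: rounding 0.999… down at the nth position gives 1 − 1/10[sup]n[/sup], and rounding it up gives 1, so the definition asks for the x with 1 − 1/10[sup]n[/sup] <= x <= 1 for every n. The number 1 satisfies every one of those inequalities, and nothing else does (any x below 1 fails the lower bound as soon as 1/10[sup]n[/sup] drops below 1 − x), so “0.999…” denotes 1.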
This definition seems awkward to me. It isn’t defined using something like an epsilon-delta proof? Show that for any epsilon > 0, there is always some N such that for all n > N the n-digit truncation is within epsilon of 1?
You can give various equivalent definitions by which to interpret what an infinite digit sequence means. I tried to pick one that was most elementary; indeed, one which could be explained to an elementary schooler. Namely, “A digit-sequence k means the number which is >= each of its rounding downs (that is, the rounding-downs at each particular digit-position) and <= each of its rounding ups.”
An alternative definition, which you are now referring to and which has been invoked by others throughout these discussions as well, is “A sequence k means the number which is the limit of its corresponding sequence of rounding-downs”, with limits in turn being defined in an epsilon-delta manner (so that the final definition is "A sequence k means the number n with the property that for any positive distance epsilon, there is some digit-position delta such that for every digit-position delta’ beyond delta, the rounding-down of k at delta’ is within epsilon of n"). This is equivalent to my definition, but, I think, more difficult to immediately understand, particularly given the struggles some people have with epsilon-delta definitions. (That having been said, there’s nothing preventing you from teaching this one to an elementary schooler as well, should you like…)
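To instantiate that for the case at hand: the rounding-down of 0.999… at digit-position m is 1 − 1/10[sup]m[/sup], so the limit definition asks for the number x such that for every epsilon > 0 there is some position N with |x − (1 − 1/10[sup]m[/sup])| < epsilon for all m beyond N. Taking N large enough that 1/10[sup]N[/sup] < epsilon shows that x = 1 qualifies, and no other number does, so the two definitions agree.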
Accordingly, for the purpose of having a simple, clear, concise, and immediately graspable definition indicating how mathematicians interpret infinite digit sequences, I am sticking with the one I’ve been giving.
You can use a variety of definitions for infinite decimal fractions. Which one is best to use depends on how you define the real numbers given the rational numbers. I think that the most popular way of defining the real numbers is using Dedekind cuts, and I suspect that works well with Indistinguishable’s definition of decimal fractions.
Is 1/3 >= each of 0, 0.3, 0.33, 0.333, and so on? Is 1/3 <= each of 1, 0.4, 0.34, 0.334, and so on? If so, then 1/3 is 0.333…, by the above definition of how mathematicians use the language of infinite decimal notation. That’s all there is to it. Keep pointing out that the standard definition is the above one till it’s beaten into them. They can define and explore other similar numerical and notational systems if they like, but there’s really no room to argue over what the standard system is.
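A quick mechanical check of those two questions, using Python’s exact fractions (illustrative only):
[code]
from fractions import Fraction

third = Fraction(1, 3)
for n in range(1, 8):
    round_down = Fraction(int("3" * n), 10**n)    # 0.33...3 (n digits)
    round_up   = round_down + Fraction(1, 10**n)  # 0.33...4 (n digits)
    print(n, round_down <= third <= round_up)     # True for every n
[/code]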
So are you saying that .333~+.666~ does not equal .999~? Then what does it equal? You are either claiming that or you are contradicting yourself. By the definitions I know of mathematics all three numbers are rational as all can be expressed as fractions of integers (1/3, 2/3, and 1/1 respectively). But you seem to be arguing that only the first two are true. Is that the case?