I smoked quite a lot of dope when I was young too.
I have no idea what point you’re trying to make here.
Look, it’s really simple. In the standard version of real numbers, 0.9… = 1.
You’re perfectly free to design a different system of numbers where 0.9… <> 1.
However, you might be surprised to discover that it behaves in strange ways that you didn’t anticipate.
For example, using standard real numbers:
10 * 0.9… = 9.9…
10 * 0.9… - 0.9… = 9.9… - 0.9…
(10-1) * 0.9… = 9.9… - 0.9…
(10-1) * 0.9… = 9
9 * 0.9… = 9
0.9… = 1
How would this derivation work out in the new number system you want to use?
Specifically, what is 10 * 0.9… equal to in your number system?
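The derivation above can also be checked exactly, without any hand-waving about infinite decimals. A small sketch using Python's exact `fractions.Fraction`: the partial sums 0.9, 0.99, 0.999, … each differ from 1 by exactly 1/10^n, so the gap shrinks to zero and the limit of the infinite decimal is 1.

```python
from fractions import Fraction

def partial_sum(n: int) -> Fraction:
    """Exact value of 0.99...9 with n nines: sum of 9/10^k for k = 1..n."""
    return sum(Fraction(9, 10**k) for k in range(1, n + 1))

for n in (1, 5, 20):
    # The gap to 1 is exactly 1/10^n, which shrinks toward 0 as n grows.
    assert Fraction(1) - partial_sum(n) == Fraction(1, 10**n)
```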
Hello Hamster King from LA
Thank you for your interest in our product.
Please sign on the dotted line and agree to a warranty of 10 years.
OK, you have to go back and read through the thread but I will try to give you the condensed version.
There is no number .999… in the set we would be working with.
Because by definition the set would only contain numbers whose decimal representations contain a finite number of digits. So for example, we could simply define that finite number to be whatever allows a convenient way of expressing what is presently thought to be the smallest interval of space itself, the Planck length.
We could peg it at the Planck length, or set it to a smaller interval if we like, to extend the warranty on the system. Either way, the bottom line is that we have to decide that we believe the real world is finite, and that when we then go to make bridges and buildings using this system, they aren't going to collapse because the mapping of that which is conceptual to reality is not sufficiently symmetric.
All of this has to do with the practical side of defining the needs of the set that is going to be used in applied math.
Needless to say the needs of Physicists are far different than the needs of pure mathematicians.
Calculus over the reals seems to work pretty damn well for physicists, so this entire discussion, to me, is less about the practical considerations than about the more profound aspects of the concepts of infinite precision, infinitesimals, and infinity. Just as we know that F = ma is an idealized approximation that works very well, it seems that might be the case for the reals too. Which makes them not so real, and not so necessary, in the realm of applied math.
I will let the pure and applied math folks here dissect and toss out the tonsils from the above if they so desire but that’s the way I see it.
Hope that clarifies my understanding of the practical side of this discussion.
Yeah I guess anybody who stumbled on this thread and the first thing they saw was that question would probably make that inference.
Hmmmm … so you’ve decided that you’re using a system of numbers where repeating decimals aren’t allowed.
So, that means that 1/3 is undefined? Like how 1/0 is currently undefined in normal math? That seems like an awkward property for a system of numbers to have, but okay…
But then 1/3 would be defined if you switch notation to ternary. 1/3 in ternary is 0.1. So now you’ve created a system of arithmetic that gives different answers depending on the base you use to perform calculations. Is that really what you were hoping to accomplish?
Absolutely not, 1/3 is very simply defined up to however many decimal places are defined in the axioms of the set's definition. So it's there, but only up to so many decimal places. If we set the limit to ten, then there are only ten ways we can write down 1/3, which are .3, .33, .333, up to .3333333333, and that's it. Think about it this way: you work at a job where all you do all day is measure parts in a factory. The factory only manufactures large parts, say the size of tennis rackets, and they only need the rackets measured down to the nearest millimeter. Somebody asks you to go out and buy rulers to measure them. The more precise the ruler, the more expensive, so you buy the least expensive ruler with the level of precision that meets the requirements, which would be a ruler with a millimeter scale. That scale defines the limits of precision, which are sufficient for your needs.
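Those ten permitted write-downs of 1/3 are easy to enumerate; here is a sketch using Python's decimal module, with ROUND_DOWN truncation standing in for the proposed cutoff (the 10-place limit is the one assumed in the post above):

```python
from decimal import Decimal, ROUND_DOWN

third = Decimal(1) / Decimal(3)  # default context carries 28 digits

# The ten permitted representations of 1/3 in a 10-decimal-place system:
allowed = [third.quantize(Decimal(f"1E-{k}"), rounding=ROUND_DOWN)
           for k in range(1, 11)]
print(allowed[0], allowed[-1])  # 0.3 ... 0.3333333333
```

Each truncation is a distinct member of the set, so "1/3" names ten different numbers depending on how many places you keep.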
In your system, what is 0.33 - 0.3? Is it 0 or 0.03?
The numbers I like to use aren’t defined in terms of micrometers. They’re defined in terms of sets. How do you apply these precision rules to sets?
Your number system is getting weirder and weirder.
What’s 0.3333333333 + (0.3333333333 / 10) equal to?
In normal math it’s 0.33333333333, but you’ve just said that repeating decimals can’t repeat more than 10 times in your system and the answer in normal math requires 11 3’s to represent.
As I said above, you can certainly create a system of math that has properties like this, but you start getting some really weird results.
Valmont314’s definition of numbers also fails to obey the distributive property; he has defined numbers such that a*(b+c) is not always equal to ab + ac.
Seems like an awful high price to pay just for a philosophical point.
Er… what? Isn’t it 0.36666(etc) or am I misinterpreting the expression?
- .03; it is an axiom of the number system that the trailing zeros are there, up to 10 digits, as placeholders.
- The numbers you like to use are not relevant to applied mathematics and are therefore irrelevant when defining the set. As long as we set the minimum interval such that every known magnitude in applied math can be represented as a multiple of it, that is all we care about.
Look guys, my post was not about defining a number system. That has not one iota to do with my interest in this topic. But if you can simply explain how this leads to two members of the set having the same value (aside from the trivial case of .3 = .30 or whatever), then please do so.
Does it not obey the distributive property if you set the limits of precision higher than what is needed in the real world and then simply discard those digits above and beyond what is defined as the limits of measurement in the number system?
Much in the same way that one can simply choose to discard interpolated results when measuring with a ruler?
Bleah … I had to run off to a meeting and I wasn’t paying attention to what I was typing. What I meant was:
What’s 0.3333333333 + ( 0.0000000003 / 10 ) equal to?
I was trying to create a situation where numbers with fewer than 10 repeating digits added up to a number with more than 10 repeating digits.
Of course it is. You’re defining a number system that has the property that 1 <> 0.9… . We’re pointing out that you can’t do that easily or intuitively without creating a system riddled with odd inconsistencies.
Now you say you’re creating a number system that contains the number 0.9999999999 (10 nines) but not 0.99999999999 (11 nines). The problem with such a system is that it’s not closed with respect to standard arithmetic operations, i.e. I can easily construct expressions in that system that yield results that are not valid numbers.
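The closure failure is easy to exhibit. A sketch with Python's decimal module (the default 28-digit context keeps the product below exact): squaring a valid 10-place number produces a 20-place number that has no home in the system.

```python
from decimal import Decimal

x = Decimal('0.9999999999')  # ten 9s: a valid number in the proposed system

# (1 - 1E-10)^2 = 1 - 2E-10 + 1E-20, which needs twenty decimal places.
square = x * x
print(square)  # 0.99999999980000000001 -- not expressible in 10 places
```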
Scroll up and see what I said to Trinopus…I am really curious to see what he is going to cook up on this.
In general, no, even the real world doesn’t work like that.
You cannot assume that rounding errors in calculations will not propagate and grow. You can see this pretty easily by taking a limited-precision calculating device (like a pocket calculator) and repeatedly multiplying by a number, then repeatedly dividing by the same number.
For example, 5 * 123456789 * 123456789 * … * 123456789 / 123456789 / 123456789 … / 123456789. With enough steps, you’ll get back an answer that isn’t 5. However, if you change the order of operations so that each multiplication is immediately followed by a division, you get the correct result.
It’s been fifteen years since I took my numerical methods class in grad school, so I’m a bit rusty, but there are a lot of tricks you need to do algorithmically if you’re going to get the correct result in a limited precision system. You cannot just blithely assume that distribution (or commutativity, or associativity) still holds.
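One classic illustration of both points, using ordinary Python floats rather than the decimal system under discussion: naively adding 0.1 ten times drifts away from 1.0 as the rounding errors accumulate, while `math.fsum` (compensated summation, exactly the kind of algorithmic trick mentioned above) recovers the correct answer.

```python
import math

# Naive repeated addition of 0.1 accumulates binary rounding error:
naive = 0.0
for _ in range(10):
    naive += 0.1

assert naive != 1.0                  # actually 0.9999999999999999
assert math.fsum([0.1] * 10) == 1.0  # compensated summation fixes it
```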
Again, it may be more trouble than it is worth, but it might demand that there be a functional relationship between the limits of precision, the limits of measurement, and the minimum interval size. Also, much in the same way certain calculations lend themselves to certain radix systems, they might lend themselves better to systems defined with finite precision… which, again, has nothing to do with the purpose of my post. I was simply trying to point out why it is that the elements were colocated, not to embark on an effort to derive a new system.
Nope; because of the way you cut off digits, it will make a difference when you multiply by some other number. If you cut off measurement at, say, ten decimal places, and then multiply the number by ten, you now have measurement that is effectively cut off at nine decimal places. This rounding error can cause the distributive property to fail.
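Here is a sketch of that distributive failure in a 10-decimal-place cutoff system, simulated with Python's decimal module and ROUND_DOWN truncation (the specific values are illustrative, chosen to sit at the system's smallest step):

```python
from decimal import Decimal, ROUND_DOWN

TEN_PLACES = Decimal('1E-10')

def trunc(x: Decimal) -> Decimal:
    """Cut off everything past ten decimal places (no rounding up)."""
    return x.quantize(TEN_PLACES, rounding=ROUND_DOWN)

a = Decimal('0.5')
b = c = Decimal('0.0000000001')    # the smallest step in the system

lhs = trunc(a * (b + c))           # 0.5 * 2E-10 = 1E-10: survives
rhs = trunc(a * b) + trunc(a * c)  # each 5E-11 truncates to 0

assert lhs == Decimal('1E-10')
assert rhs == 0
assert lhs != rhs  # a*(b+c) != a*b + a*c under truncation
```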
We see this fairly often in opinion polls, where the numbers don’t add up to 100 because of rounding error.
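The poll effect is easy to reproduce. A minimal sketch with a hypothetical three-way split: each exact third rounds to 33%, so the published percentages sum to 99, not 100.

```python
# Three options polling exactly one third each: every share rounds to 33,
# so the published percentages sum to 99 rather than 100.
shares = [1 / 3, 1 / 3, 1 / 3]
published = [round(100 * s) for s in shares]

assert published == [33, 33, 33]
assert sum(published) == 99
```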
You do agree, though, that if I set the limits of precision to a googolplex and take results out to 5 decimal places, I am OK for 3 operations, do you not?
The system has to be tailored precisely and rigorously…if you would like to do that go for it.