I play around a lot with calculators, and I’m not ashamed to admit it. You can learn a lot that way, believe it or not. I have even shared some of these things with the SDMB in the past.
This next one is as bizarre as the rest. When you take the square root of .111111… you get .333333…, naturally, since the square root of one-ninth is one-third. But one time, just as a lark, I thought I’d take the square root of .11 alone. Then .111 (three digits), and so on. Long story short, you get the following pattern: 0.33333333331666666666624999999998. As you can see, the .33333… pattern is followed by an intrusive 1666666… pattern, and then a 2499999… pattern (presumably leading ultimately to 25).
It happens with other numbers too. Take .44444… The square root of this repeating decimal is .66666…, two-thirds, naturally. But when you do the same thing, you get 0.66666666663333333333249999999996. A “333…” pattern emerges, and then again that “25” pattern.
It doesn’t just happen with these. Consider .9999… That equals one, of course. But when you do the same, you get 0.99999999994999999999874999999994. Now you get “5” and “875” as your hidden patterns.
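All three observations can be reproduced without a pocket calculator. This is a sketch using Python’s `decimal` module (not what the original calculator does internally, but it produces the same digit patterns):

```python
from decimal import Decimal, getcontext

# 32 significant digits, roughly matching the calculator displays
# quoted above.
getcontext().prec = 32

for digit in "149":
    # Truncated repeating decimal, e.g. 0.1111111111 (ten digits)
    x = Decimal("0." + digit * 10)
    print(x, "->", x.sqrt())
```

Running it shows the same trailing “1666…/2499…”, “3333…/2499…”, and “4999…/8749…” runs described in the posts above.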
Also odd is that these patterns are “put off” until infinity, which I guess is permissible, even though they are never part of the actual number.
What is the explanation for these strange patterns? (BTW, I put this in MPSIMS because I assume I’m just sharing something, albeit odd and mathematical, with you all.)
The simple explanation is that calculators use a finite number of bits to represent numbers, and they use approximations of functions to get “good enough” answers to problems using a finite amount of hardware, a finite amount of software, and a finite amount of time. A full explanation of floating-point numbers (how computers represent decimal fractions) would not only take a long thread, it would fill a university-level course, probably one called “Numerical Analysis”.
There is the binomial approximation (1 - x)[sup]a[/sup] ≈ 1 - ax
In the case of sqrt(0.111111) we have:
sqrt(0.111111) = (1/9 - 1/9000000)[sup]1/2[/sup]
= 1/3 (1 - 1/1000000)[sup]1/2[/sup]
≈ 1/3 (1 - 1/2000000) = 0.3333333333… - 0.0000001666666666…
= 0.3333331666666…
So that explains the “1666” bit.
The “24999” bit will be explained by the next term in the binomial expansion, and so on.
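The first two correction terms can be checked numerically. This is a sketch in Python, using exact fractions for the series and `decimal` for the comparison:

```python
from fractions import Fraction
from decimal import Decimal, getcontext

getcontext().prec = 40

# The exact value being rooted: 0.111111 = 1/9 - 1/9000000
x = Fraction(111111, 1000000)
assert x == Fraction(1, 9) - Fraction(1, 9000000)

# Binomial series sqrt(1 - t) ~ 1 - t/2 - t^2/8, with t = 1/1000000,
# all scaled by 1/3:
t = Fraction(1, 1000000)
approx = Fraction(1, 3) * (1 - t / 2 - t ** 2 / 8)

true_root = Decimal("0.111111").sqrt()
approx_dec = Decimal(approx.numerator) / Decimal(approx.denominator)
print(true_root)    # the "1666" and "2499" runs appear here
print(approx_dec)   # the two-term series already agrees closely
```

The two printed values agree until the next term of the series (of order 10[sup]-18[/sup] here) kicks in, which is exactly where the next hidden pattern starts.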
I think this is it. I tried the Arbitrary Precision Calculator at apfloat.appspot.com to calculate:
sqrt(0.111000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000)
and the result it returned was:
0.333166624979153636711908687038596556216762938123024462459526991876854935570345245438171979288344173436908078192.
[Edit after the fact: for some reason pasting numbers into SDMB put a space into one of the strings, but it doesn’t appear that way on the calculator web page.]
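The same check can be run locally instead of through the web calculator. This is a sketch with Python’s `decimal` module at a comparable precision:

```python
from decimal import Decimal, getcontext

# 112 significant digits, comparable to the apfloat result above
getcontext().prec = 112

print(Decimal("0.111").sqrt())
```

The output matches the apfloat digits quoted above (the trailing zeros in the input don’t change the value, so sqrt(0.111) suffices).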
I have to confess and add that there is another reason why this random discovery fascinates me. Even when you don’t take the square root of a string of ones, even when you just take the square root of .1, you still see the pattern: 0.31622776601683793319988935444327. And it clearly isn’t just the quirk of one calculator; it is a pattern built into the actual number. My question: could you use this to calculate irrational numbers, like √.1?
Slide rules use an interesting method for calculating with large and complex numbers: by working in logarithms, they replace massive amounts of multiplication with simple addition. Could something similar hold true for other numbers too?
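The slide-rule trick is just the logarithm identity log(ab) = log(a) + log(b). A minimal sketch in Python, with made-up example numbers:

```python
import math

# A slide rule multiplies by adding lengths proportional to
# logarithms: log10(a * b) = log10(a) + log10(b).
a, b = 3.7, 42.0
product = 10 ** (math.log10(a) + math.log10(b))
print(product)  # approximately 155.4, i.e. 3.7 * 42.0
```

The rulers on a slide rule are marked on a log scale, so sliding one against the other adds the logarithms mechanically.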
Here is the pattern again: √.11111=0.33333166666249997916653645742187. As you can see, there are even more patterns that follow the initial “16666…” and “25”.
Could I be on to something here? And could I have discovered something amazing too?
No, it’s a rounding error built into the way all (or most) calculators work.
You can use your calculator to calculate approximations to irrationals, but it will not give you a correct, fully precise answer. (Indeed, it could not represent one on its screen even if it “knew” it.)
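To see that limit concretely, here is a sketch in Python, whose 64-bit floats behave much like a calculator’s internal format:

```python
import math
from decimal import Decimal, getcontext

# A 64-bit IEEE 754 float carries only about 16 significant decimal
# digits, so math.sqrt(0.1) is already a rounded approximation.
print(math.sqrt(0.1))  # 0.31622776601683794

# The same root to 40 digits for comparison:
getcontext().prec = 40
print(Decimal("0.1").sqrt())
```

The float agrees with the high-precision value only up to its ~16th digit; everything past that is simply not stored.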