74westy: I seem to recall a paper from the 1980s that reported inconsistent OLS results between even fairly established software programs.
No, storing it as double avoids the issue entirely, as the problem is in the display of figures (in Stata). You can handle floats if you take the time to specify the proper format for each and every variable, though that involves some fiddly (if not complicated) coding. But yes, there’s rounding error around the ~16th digit for double precision.
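Here’s a quick illustration of the storage-precision side of that, using Python/numpy rather than Stata (so treat it as a sketch of the general point, not of Stata’s behavior): a float carries only about 7 significant decimal digits and a double about 15-16, so any display format that prints more digits than the storage type actually holds is just showing rounding noise.

```python
import numpy as np

# Same value, stored as single vs double precision.
x = 0.1
print(f"{np.float32(x):.17f}")   # 0.10000000149011612 -> error shows up around the 8th digit
print(f"{np.float64(x):.17f}")   # 0.10000000000000001 -> error pushed out to the ~17th digit
```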
Interesting. I see from wiki that half-precision is frequently used in image processing and neural networks. Are the calculations done in double or float?
Presumably that’s easy to change, though, and the problem is that the default is inappropriate for single precision.
Those would be my areas of expertise.
Internal calculations are done with a mix, though almost never with doubles (just half+single). A dot product is a common operation: a_1 b_1 + a_2 b_2 + ... + a_n b_n . The multiplies can be done in the lower precision, but the additions tend to need more if the dot product has many terms. You can still reduce the precision in the end, but the intermediate calcs need more (similar to what Gould said above).
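A minimal numpy sketch of that point (my own toy example, not any particular library’s kernel): do the per-term multiplies in half precision but carry the running sum in single precision. With an all-half accumulator the sum literally gets stuck once it reaches 2048, because the float16 spacing there is 2.0 and adding 1.0 no longer changes anything.

```python
import numpy as np

n = 100_000
a = np.ones(n, dtype=np.float16)
b = np.ones(n, dtype=np.float16)

products = a * b                      # multiplies done in float16 (each product is exact here)

total_half = np.float16(0.0)          # accumulate entirely in half precision
for p in products:
    total_half = total_half + p       # stalls once the running total hits 2048.0

total_mixed = products.astype(np.float32).sum()   # half multiplies, single-precision adds

print(total_half)    # 2048.0
print(total_mixed)   # 100000.0
```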
You can change the default storage type from float to double (which I’ve done). You can make a habit of always explicitly defining the storage type (ditto). You can reset the format of any variable individually. I’m unclear about whether you can change the default format for a given storage type.
I wrote a program like that once, except for orbital measurements, not surveying. It was a lot of fun, but also a lot of work to get it to work right. I think my program was one of about three in the whole class that didn’t barf on the test data (which was deliberately chosen to be uncomfortably close to some coordinate singularities).
I’m too mathematically naive to know this answer, but could an infinite length string of digits contain an infinite length substring of just one digit? Apropos to this question, could there be an infinite length substring of zeros hiding someplace in the decimal expansion of π?
I’d say no, if there were an infinite string of zeros then pi would be rational.
Of course one has to be careful about “infinity”/“infinite”.
Does an infinite string of ones followed by an infinite string of 2’s even make sense?
I took your question to mean it is only zeros.
That’s part of my problem: not knowing the rules of “infinite”. I can imagine things that don’t make sense according to the clear rules of any form of number theory, and I’m wondering if “infinite substring of infinite length string” can make sense.
I know it is allowed in certain cases, like a bounded infinity (the number of real numbers between 0 and 1, for example), but I don’t know the rules and conventions for an infinite enumeration like a decimal expansion.
Not if you mean that the “infinite length substring” is consecutive. Because if it’s infinite in length, it never ends; and if it never ends, there are no other digits after it. So then what you would have would be a repeating decimal, which is rational.
But if you mean something like “every third digit, starting with the twelve millionth” I think the answer is “Yes, it could, but then it wouldn’t be normal.”
This is a case where we have to be careful about what we mean. There’s no infinite substring of zeroes in the sense that at some point it starts having a bunch of zeroes, and they go on to infinity, and then “after” that they’re something else.
However, if pi is normal, then the lengths of single-digit substrings are unbounded. That is, no matter what finite length we pick, there are single-digit substrings longer than that. A thousand digits, a million, a googolplex, TREE(3), whatever. There will be something longer.
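For what it’s worth, here’s a small mpmath sketch of mine that pokes at the first 100,000 decimal digits of pi and reports the longest run of a single repeated digit found there (the runs in that stretch are short, but nothing in the digits seen so far caps how long they eventually get if pi is normal):

```python
from itertools import groupby
from mpmath import mp

mp.dps = 100_001                      # work with ~100,000 decimal digits of pi
digits = str(mp.pi)[2:]               # drop the leading "3."

longest = max(len(list(run)) for _, run in groupby(digits))
print(longest)                        # length of the longest single-digit run in this stretch
```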
That’s with the real numbers at least. Perhaps one could extend the concept so that the digits are indexed by the ordinal numbers, which can go “beyond” infinity into multiples of infinity, powers of it, etc.
That raises an interesting question: I know essentially nothing about the hyperreals, but I wonder if hyperreal numbers have decimal representations indexed by nonstandard integers.
How can you avoid that? If you have hyperreal numbers, then you also have hyper integers, and standard decimal expansion won’t work any more; your digits will be indexed by non-standard natural numbers.
Counting of things: applications level off at around 80 orders of magnitude (~10^80), the number of atoms in the observable universe.
Possibilities: a very simple model would be the factorial, which is a formula associated with the dinner party problem. For N seats there are N! possible arrangements. Around 60 guests are needed for the number of possibilities to equal the number of atoms in the universe. In a small city, arrange 10,000 households among 10,000 dwellings and you get about 2.8e35659, a number taking up about 20 pages to write out (60 characters per row, 30 rows per page). So possibilities are much greater than existing reality.
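Those figures check out if you run the arithmetic (a quick Python sanity check of the numbers above):

```python
import math

print(math.factorial(60) > 10**80)          # True: 60! is about 8.3e81, past the atom count
digits = len(str(math.factorial(10_000)))   # decimal digits in 10,000!
print(digits)                               # 35660, i.e. the ~2.8e35659 figure
print(digits / (60 * 30))                   # ~19.8 pages at 60 characters x 30 rows
```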
Exact repetition: Given all the possibilities, how often does history repeat itself exactly? Here we have TREE(3). The general answer is, “Much longer than the amount of time we have to work with (heat death of universe)”, though for common enough events (typically made common via a broad scope definition) the answer would be, “Oh, all the time.” The transition from TREE(2) to TREE(3) is far beyond explosive, beyond the ratio of the longest unit of time to the shortest.
We might run out of actual world computational capacity, but the real number line is long enough to handle any expansive conceptual list that you can throw at it.
Are you talking about recurrence times in dynamical systems?
When people say things like how long it will take a system the size of our universe to recur, it comes out to a mere 10^{10^{120}} years (or whatever). All this combinatorial tree stuff really has nothing to do with physics. TREE(3) is, how shall we say, a pretty big number, and how long it will take for history to repeat itself is just peanuts even compared to pretty small numbers, so you are not doing yourself any favors thinking of it in those terms.
Orders of magnitude (time) - Wikipedia gives 10^10^10^123* for “The scale of an estimated Poincaré recurrence time for the quantum state of a hypothetical box containing a black hole with the mass of the observable Universe.”
*Technically, that many quecto (10^-30) seconds; but as the article mentions, at these scales the unit of time chosen is trivial.
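To put a number on the “unit of time is trivial” remark, here’s a back-of-envelope Python sketch using the smaller 10^10^120-years figure mentioned above: switching from years to quectoseconds only adds about 37.5 to the middle exponent, and one level up the tower that shift is invisible.

```python
import math

exp_years = 10**120                        # middle exponent of T = 10**(10**120) years
unit_shift = math.log10(3.156e37)          # ~37.5: a year is ~3.156e37 quectoseconds

# Switching units turns the exponent into 10**120 + 37.5; one level up the tower,
# that changes log10 of the exponent by roughly:
print(unit_shift / (exp_years * math.log(10)))   # ~1.6e-119 -- utterly negligible
print(math.log10(10**120 + 38))                  # 120.0: the shift is below double precision
```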
I do wonder what “a decimal number with digits indexed by the ordinals” is isomorphic to, though. It doesn’t immediately give you the hyperreals since those have both infinite and infinitesimal numbers. And I’m not sure it really works on the infinitesimal side, either, though I think it works well enough that you could support non-standard calculus with them.
Ordinals are one thing; hyperreal decimals are indexed by integers. Except now they are nonstandard integers.
(So, we can imagine a nonstandard natural number that is greater than any standard natural number, take that many zeros after the decimal point, then some more digits. The resulting number will be infinitesimal.)
I guess my thought process was that the ordinals are a lot like non-standard integers, at least in some ways. Though thinking about it more, they don’t seem to work. Suppose that you have a binary number that’s zero everywhere except that the ωth bit is 1. What happens when you multiply by 2? Then the (ω-1)th bit would have to be 1, but ω-1 isn’t actually an ordinal, so there’s no such digit position. So I guess you actually do need the hyperintegers or something where subtraction works properly.