I’m interested in numbers that are extremely close to, but less than, 1. In particular, what are the practical considerations in doing numerical (as opposed to symbolic) calculations with them, and in representing them?
I’m working with dimensionless temperatures in unsteady-state heat transfer, where such numbers arise. They can be represented as, for example, “1 - 1e-80”. In the same problem, numbers very close to 0 can also arise, for example “1e-80”. To handle both in the same problem with naive techniques, one might wish for perhaps 100 digits of precision throughout the calculation. In physically significant practical examples it may be trivially easy to see that recasting the problem to work with the difference between the original number and 1 fixes everything, but that isn’t always the case when making sweeping generalizations that must accommodate vastly different physical situations.
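A minimal sketch of the two points above, using Python: in IEEE-754 double precision the offset of 1e-80 below 1 is lost entirely, while an arbitrary-precision decimal context set to roughly 100 digits keeps both the near-zero and the near-one values intact.

```python
from decimal import Decimal, getcontext

# In double precision, 1 - 1e-80 rounds to exactly 1.0:
naive = 1.0 - 1e-80
print(naive == 1.0)   # True -- the 1e-80 offset is lost entirely

# With ~100 significant digits, both 1e-80 and 1 - 1e-80 survive:
getcontext().prec = 100
theta = Decimal(1) - Decimal("1e-80")
print(theta)                # 0.999...9 (80 nines), held exactly
print(Decimal(1) - theta)   # the 1E-80 difference is recovered exactly
```

This works, but carrying 100-digit arithmetic through a whole computation is exactly the “unthoughtful” brute-force option the question alludes to; the later answers describe cheaper recastings.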
Perhaps similar problems arise in statistics, where probabilities vanishingly close to 1, as well as those vanishingly close to 0, arise.
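As an illustration of the statistics case (my example, not from the thread): the upper-tail probability of a Gaussian at ten standard deviations is tiny but nonzero, yet computing it as 1 minus the CDF underflows to exactly zero, because the CDF has rounded to 1.0. Working with the complement directly, via `math.erfc`, keeps it.

```python
import math

x = 10.0  # ten standard deviations out

# CDF computed the obvious way: Phi(x) = (1 + erf(x/sqrt(2))) / 2
phi = (1.0 + math.erf(x / math.sqrt(2.0))) / 2.0
print(1.0 - phi)   # 0.0 -- the tail probability is lost

# Working with the complement directly keeps it:
tail = math.erfc(x / math.sqrt(2.0)) / 2.0
print(tail)        # ~7.6e-24, a perfectly representable double
```

This is the same cure as storing the difference from 1: never form the near-one quantity at all; compute its complement from the start.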
One intriguing case of practical calculation with such numbers is in some HP calculators that included a “LN(1+X)” key for taking the natural logs of numbers that are almost 1 (returning a negative number very close to zero, for example -1e-80). In this case numbers slightly greater than 1 are also important.
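The same facility the HP key provided is in most standard math libraries as `log1p` (and its inverse `expm1`): the argument is passed already expressed as a difference from 1, so the tiny offset is never rounded away. A quick Python demonstration:

```python
import math

# Naively, 1 - 1e-80 rounds to 1.0, so its log comes out as exactly 0:
print(math.log(1.0 - 1e-80))   # 0.0

# log1p takes the argument already expressed as a difference from 1,
# so the tiny offset is never lost:
print(math.log1p(-1e-80))      # about -1e-80, as the HP key would return

# The inverse trick: expm1(x) computes e^x - 1 without cancellation near 0.
print(math.expm1(-1e-80))      # about -1e-80
```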
An intriguing case of practical representation for real physical systems is representing purity of a chemical reagent or silicon crystal, where one might say it is 99.9999% pure, or “six nines” or “6N”. 99.99997% might be represented as “6N7”. If one does not want to type many “9” digits and then try to read them like reading notches on a stick, one must do to the “9” digit what scientific notation does to repetitious strings of zeroes.
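A sketch of such a converter (the function name and exact formatting are my invention, not an industry standard). Note that it takes the *impurity* (the difference from 1) as input, so no precision is lost subtracting from 1:

```python
import math

def nines(impurity):
    """Format a purity of 1 - impurity in 'nines' notation, e.g. 3e-7 -> '6N7'.

    Hypothetical helper: takes the impurity (difference from 1) directly,
    so no precision is lost subtracting from 1.
    """
    n = math.floor(-math.log10(impurity) + 1e-9)   # count of leading 9s
    digit = round(10 - impurity * 10 ** (n + 1))   # first digit after the 9s
    return f"{n}N" if digit == 0 else f"{n}N{digit}"

print(nines(1e-6))   # 6N   (99.9999% pure)
print(nines(3e-7))   # 6N7  (99.99997% pure)
print(nines(1e-7))   # 7N   (99.99999% pure)
```

The log10 in there is doing exactly what the answer suggests: treating the run of 9s the way scientific notation treats a run of 0s.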
As I recall, when recasting the formula for the distance from a point above the ground to the horizon, one of the seemingly critical values is so close to zero (in proportion to another) that it can be ignored without significantly affecting the result, which greatly simplifies the trigonometry. But I don’t have the energy to think it through right now.
I think you have the best mechanism covered already: store it as the difference from 1. With many modern OO languages, especially those with operator overloading, it isn’t too hard to build a system that automatically manages the representation. It can be crafted to always keep the precision maximised for the range of values you expect to legitimately come across, and to do so without too drastic a loss of performance.
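A minimal sketch of that mechanism in Python (the class name `NearOne` is mine, and only multiplication is overloaded; a real implementation would cover the other operators and mixed operands):

```python
class NearOne:
    """A number 1 - eps, stored as eps so precision near 1 is preserved.

    A sketch only: just multiplication is overloaded, and eps is assumed
    small enough that this representation is the accurate one.
    """
    def __init__(self, eps):
        self.eps = eps          # the (small) difference from 1

    def __mul__(self, other):
        # (1 - a)(1 - b) = 1 - (a + b - a*b)
        return NearOne(self.eps + other.eps - self.eps * other.eps)

    def __float__(self):
        return 1.0 - self.eps   # rounds to 1.0 when eps is tiny

x, y = NearOne(1e-80), NearOne(2e-80)
p = x * y
print(p.eps)     # ~3e-80: the product's distance from 1, fully retained
print(float(p))  # 1.0 -- what a plain double would have shown all along
```

The point of the overload is that the identity is rearranged algebraically so the near-one values are never actually formed; only their small complements are ever computed with.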
But in general you always have to spend some time with the program code to ensure you don’t get into trouble. Accidental loss of precision from careless expression coding is an all-too-common issue. This is numerical methods 101, but a depressing number of students seem to sleep through those classes.
You may be remembering that sin(x) ~= x for small x measured in radians.
As applied to that problem: from a height of, say, 10 feet above the ground, you’re looking ever so slightly below the local horizontal to the horizon several miles away. Both the angle and its sine are substantially zero and can mostly be ignored. Atmospheric refraction and local non-sphericality of the Earth are probably bigger factors.
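The negligible term the earlier poster half-remembered is plausibly the h² in the exact spherical-Earth result d = sqrt(2Rh + h²), where the observer height h is vanishingly small next to 2Rh. A quick numeric check under that assumption:

```python
import math

R = 6.371e6   # Earth's mean radius in metres (approximate)
h = 3.048     # 10 feet of eye height, in metres

# Exact (spherical-Earth) horizon distance vs. the usual approximation:
exact  = math.sqrt(2 * R * h + h * h)   # right triangle to the tangent point
approx = math.sqrt(2 * R * h)           # h*h dropped: ~2e-7 of the 2*R*h term

print(exact)                       # ~6232 m, a bit under 4 miles
print((exact - approx) / exact)    # relative error ~1e-7
```

So dropping the h² term costs about one part in ten million here, far below the refraction effects mentioned above.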
One ‘trick’ might be to multiply all numbers by 1000 or 10,000 and correct the result accordingly. This is the opposite of the method used in financial reports where the numbers are often shown as thousands or millions.
This will gain you nothing at all, if you’re already using floating point numbers (which, in the OP’s case, you almost certainly will be).
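A two-line demonstration of why: uniform scaling of binary floating-point numbers only shifts the exponent, leaving the 53-bit significand, and hence the relative precision, unchanged.

```python
# Scaling every quantity by 1000 just shifts the binary exponent; the
# 53-bit significand -- and hence the relative precision -- is unchanged.
x = 1e-80
print(1.0 - x == 1.0)                          # True: the offset is lost
print((1000.0 - 1000.0 * x) / 1000.0 == 1.0)   # still True after scaling
```

Scaling does help in fixed-point or integer arithmetic (which is why financial reports can get away with it), but not here.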
There is no one-size-fits-all answer for the computer, here. This is a problem for the programmer, not for the computer. You need to recast your problem so the loss of precision never occurs, and knowing how to do that depends on detailed understanding of the original problem.
This may be useless for your application, but one technique for very large numbers is to break the number into portions at different scales. Just as something could be said to be 1,001,001 millimeters away, it could also be expressed as 1 kilometer, 1 meter, and 1 millimeter away. This can be useful for recording very large numbers precisely, but I don’t think it helps much if you have to perform calculations on the values.
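That decomposition is just repeated `divmod` in integer arithmetic, which is exact; a small sketch (the function name is mine):

```python
# Decompose a length given in millimetres into (km, m, mm) parts exactly,
# using integer arithmetic so no precision is ever lost:
def split_length(mm_total):
    km, rest = divmod(mm_total, 1_000_000)   # 1 km = 1,000,000 mm
    m, mm = divmod(rest, 1_000)              # 1 m  = 1,000 mm
    return km, m, mm

print(split_length(1_001_001))   # (1, 1, 1): 1 km + 1 m + 1 mm
```

As the answer says, this is a representation trick: arithmetic on such mixed-scale tuples means constant carrying between the parts, which is why it suits recording better than computing.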