Significant digits in different number systems

Suppose you have measured the length of a widget and determined it is 1234.987 cm +/- 0.001 cm. There are 2.54 cm in one inch. To convert the measured value into inches you would divide by 2.54 and get 486.2153 in +/- 0.0004 in. Some people might naively think that the converted value should be reported as 486 in because there are only three significant digits in 2.54. However, an inch is defined as 2.54 cm exactly. In this scenario, 2.54 is one of those rare, “pure” numbers mentioned in the last few posts.
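To make the "exact conversion factor" point concrete, here is a small sketch in the same Common Lisp used in the transcript further down; the variable names are my own and the values in the comments are rounded:

;; 2.54 cm per inch is exact, so keep it as a rational and only
;; fall back to floating point at the very end.
(defparameter *cm-per-inch* 254/100)        ; read as 127/50, exact by definition
(defparameter *length-cm* 1234987/1000)     ; 1234.987 cm as an exact rational
(defparameter *error-cm* 1/1000)            ; +/- 0.001 cm

;; The division loses nothing; the results are exact rationals.
(/ *length-cm* *cm-per-inch*)               ; => 1234987/2540
(/ *error-cm* *cm-per-inch*)                ; => 1/2540

;; Only the final coercion rounds anything.
(coerce (/ *length-cm* *cm-per-inch*) 'double-float)   ; ~486.2153543 in
(coerce (/ *error-cm* *cm-per-inch*) 'double-float)    ; ~0.0003937 in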

If needed, you could use more than one data type, and convert between rational (and integer) numbers and floating-point numbers when necessary. Then you do not have to worry about the "precision" of rational numbers:

This is SBCL 2.1.1, an implementation of ANSI Common Lisp.
More information about SBCL is available at <http://www.sbcl.org/>.

SBCL is free software, provided as is, with absolutely no warranty.
It is mostly in the public domain; some portions are provided under
BSD-style licenses.  See the CREDITS and COPYING files in the
distribution for more information.
* (setq x (/ 1 7))
; in: SETQ X
;     (SETQ X (/ 1 7))
; 
; caught WARNING:
;   undefined variable: COMMON-LISP-USER::X
; 
; compilation unit finished
;   Undefined variable:
;     X
;   caught 1 WARNING condition
1/7
* (type-of x)
RATIO
* (expt x 100)
1/3234476509624757991344647769100216810857203198904625400933895331391691459636928060001
* (coerce x 'float)
0.14285715
* (coerce x 'double-float)
0.14285714285714285d0

The simple rule for significant digits and error: report the error to only one significant digit.

The logic is simple: if you knew the error to several digits of precision, you would also know the number to that level of precision. When I measure something, the error is half the smallest scale division. If I can measure an item to 1/10 of an inch, then I cannot be sure what the hundredths digit is, and it makes no sense to state the value any closer than that.
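As a rough sketch of that rule in the same Common Lisp as above (the helper names are hypothetical, not from any library): round the error to one significant digit, then round the value to the same decimal place.

(defun error-place (err)
  ;; Decimal exponent of the leading digit of ERR, e.g. -4 for 3.937d-4.
  (floor (log err 10)))

(defun quote-measurement (value err)
  ;; Round ERR to one significant digit and VALUE to the same decimal place.
  (let ((scale (expt 10.0d0 (error-place err))))
    (values (* (round value scale) scale)     ; rounded value
            (* (round err scale) scale))))    ; error with one significant digit

(quote-measurement 486.2153543d0 0.0003937d0)  ; ~486.2154 and ~0.0004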

If I measure 2.4 whatchamas, I see on the scale that the reading is closer to 2.4 than to 2.5 or 2.3, so the error is +/- 0.05. If I measure something as 1/7 inch, and that's the best I can do, the uncertainty is not "it could be between 0.14285 and 0.14286". It's somewhere between 1/14 and 3/14 (roughly 0.07 and 0.21), since it could be off by 1/14 either way.
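Sticking with the rationals from the transcript above, those bounds are easy to check exactly; a quick sketch assuming a reading of 1/7 with a +/- 1/14 uncertainty:

;; Exact bounds for 1/7 +/- 1/14.
(- 1/7 1/14)                      ; => 1/14
(+ 1/7 1/14)                      ; => 3/14
(coerce 1/14 'double-float)       ; ~0.0714
(coerce 3/14 'double-float)       ; ~0.2143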

Converting bases does not change that: convert the error and round it to one significant digit. Whether I measure something to the limit of a scale or average several readings and find a confidence range, it is meaningless to express the error to more than one significant digit.

When converting decimal fractions between bases, there will always be an uncertainty. 1/3 as a decimal is maybe 0.33, since extra digits would be excessively precise; what's the point? But if we insist on, say, 8-digit precision, then any conversion of a fraction to a decimal will have an uncertainty of 0.00000001, since the result may be a repeating decimal that has to be rounded. Convert it to base 6 with 8 digits, and the conversion process introduces the same kind of uncertainty in the 8th base-6 digit. The rule when multiplying (as a base conversion effectively is) is to add the relative errors: 1/100000000 plus 1/1679616 is the resulting error range when converting to base 6.
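For what it's worth, those figures can be checked with exact rationals in the same Common Lisp style as above:

;; One unit in the 8th base-6 digit is 1/6^8.
(expt 6 8)                                         ; => 1679616
;; Adding the two quoted error terms exactly, then as a double.
(+ 1/100000000 1/1679616)                          ; exact rational sum
(coerce (+ 1/100000000 1/1679616) 'double-float)   ; ~6.05e-7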