When Converting Between Numbering Systems, How Do You Adjust For Significant Figures?

Suppose you had a quantity in a base-10 numbering system with a given number of significant figures.

Now, if you were to convert this quantity to a different numbering system, such as hexadecimal, the number of significant figures should also change in most cases. If, for example, you converted from base-10 to binary, it would appear that you would need far more digits to represent the same level of precision. I’m tempted to assume that you would adjust the number of significant digits by just multiplying by 10/2 = 5, but I suspect the actual conversion may not be that simple.

Is there some formula that enables you to account for the change in the number of significant digits when converting between numbering systems?

Thanks.

I don’t know if there is a rule for this. Significant figures are pretty much only used in classroom situations; in the real world, error propagation is the standard method. Since base conversion doesn’t change the value of a number, you probably only need to convert the error as well.

You can always use the number of significant figures to determine an actual margin of error, and convert that into the new base as well.

Just as a quick example, suppose that a given quantity is known to be 1.24 in decimal to three significant figures. That means the exact value of the quantity must lie in the range 1.235 <= x < 1.245. 1.235 in binary is 1.001111000010…, and 1.245 in binary is 1.0011111010… . So 1.001111 must be a correct binary representation of the quantity with 7 bits of precision.
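As a sketch, here’s that interval trick in Python; the helper name is made up, and it just does exact-fraction long division to get binary digits and then counts how many leading bits the two bounds share:

[code]
from fractions import Fraction

def binary_string(x, places):
    """First `places` fractional binary digits of x (truncated), for 0 <= x < 2."""
    digits = []
    frac = x - int(x)
    for _ in range(places):
        frac *= 2
        bit = int(frac)
        digits.append(str(bit))
        frac -= bit
    return str(int(x)) + "." + "".join(digits)

lo = binary_string(Fraction("1.235"), 16)   # 1.0011110000101000
hi = binary_string(Fraction("1.245"), 16)   # 1.0011111010111000

# Count how many leading bits the two bounds agree on.
agree = 0
for a, b in zip(lo, hi):
    if a != b:
        break
    if a in "01":
        agree += 1
print(agree)  # 7 -- any value in [1.235, 1.245) starts with 1.001111 in binary
[/code]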

Unfortunately this isn’t always guaranteed to work well. If the upper bound on the range had been a little bit larger, we would have gotten 1.01000000100… or something like that in binary, and then our binary representation would technically only have two bits of precision. This is an unfortunate side effect of converting from one base to another, and I don’t think it’s avoidable if you want rigorous accuracy.

On the other hand, if you just want a quick-and-dirty estimate, try the formula (log 10/log 2)(n-1) + 1. Roughly speaking, (n-1) digits to the right of the decimal point (or all of the significant digits except the leading one, in scientific notation) become approximately (log 10/log 2)(n-1) digits after converting to binary. Note that log 10/log 2 is approximately 3.32.
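For what it’s worth, here’s that estimate as a minimal Python sketch (the function name is just something I made up for the formula above):

[code]
import math

def estimated_binary_sigfigs(n):
    # (n - 1) digits after the leading one, scaled by log(10)/log(2) ~ 3.32,
    # plus 1 for the leading digit.
    return math.log(10) / math.log(2) * (n - 1) + 1

for n in (2, 3, 5):
    print(n, "->", round(estimated_binary_sigfigs(n), 1))
# 2 -> 4.3
# 3 -> 7.6
# 5 -> 14.3
[/code]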

Another way of looking at it would be to say, e.g., 1.5 to two significant figures is between 1.45 and 1.55. In binary:
1.45 = 1.01110011001100…
1.5 = 1.1
1.55 = 1.10001100110011…

So it’s between 4 and 5 significant figures, i.e. between 1.100 (which means between 1.0111 and 1.1001) and 1.1000 (which means between 1.01111 and 1.10001).
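One way to pin down that “between 4 and 5” is to take the smallest bit count whose implied half-a-last-place uncertainty is no wider than the decimal one; a rough Python sketch of that rule of thumb (my framing, not a standard convention):

[code]
import math

def binary_sigfigs(half_width):
    # For a quantity between 1 and 2, k binary significant figures imply an
    # uncertainty of +/- 2**-k (half the last place).  Return the smallest k
    # whose implied uncertainty is no wider than the stated decimal half-width.
    return math.ceil(-math.log2(half_width))

print(binary_sigfigs(0.05))   # 5 -> 1.1000, implying +/- 0.03125
print(binary_sigfigs(0.005))  # 8
[/code]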

From the accuracy viewpoint, if you are accurate down to the 1000th (0.001D) in decimal, then you should be accurate down to the 1024th (0.0000000001B) in binary. Or down to the 4096th (0.001H) in hexadecimal… though only because stopping at the 256th (0.01H) may not accurately reflect your numbers.
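The arithmetic behind those counts is one line; a small Python sketch, assuming you only care about matching the place value (the accuracy half) and not the plus-or-minus-half-a-digit business below:

[code]
import math

def places_needed(decimal_places, base):
    # Smallest number of fractional digits in `base` whose place value is
    # at least as fine as 10**-decimal_places.
    return math.ceil(decimal_places * math.log(10) / math.log(base))

print(places_needed(3, 2))    # 10 -> 2**-10 = 1/1024, finer than 1/1000
print(places_needed(3, 16))   # 3  -> 16**-3 = 1/4096; 1/256 would be too coarse
[/code]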

However, the precision angle makes things a little strange. Something stated with an explicit accuracy down to the 1000th has an implicit precision of plus or minus five 10000ths. So “0.001D” really means “0.0005D to 0.0015D”, or roughly 0.0000000000100B to 0.0000000001100B. But if you are truly only accurate to +/-0.0005D, it would misrepresent your precision to write the value out to 13 binary places.

So, the question becomes why your figures are significant. If it’s because they represent an engineering tolerance or an analytical limit: 1) why the hell would you be converting it into binary? 2) Figure out the actual error bounds and pick the appropriate number of digits to represent your real-world precision in a given base. If it’s just an arbitrary cut-off point or the digital limit of your calculator… then you’re really only concerned with the accuracy half of the concern, in which case the first paragraph is all you need.