JS Princeton: When computing a final answer, the biggest mistake you can make is not “reporting too many significant digits”; it is failing to calculate and report an uncertainty value. Provided my technician properly calculates the measurement uncertainty, I don’t pay much attention to the number of “significant digits,” so long as it is reasonable (typically 4 to 5 for most measurements).
Let’s say I have a temperature readout with a readability of 0.001 °C and an uncertainty of ± 0.01 °C. Now suppose a fancy correction factor is applied that extends the digits. Does it really matter whether my technician reports 5.123 ± 0.01 °C or 5.1232 ± 0.01 °C? Not really. Of course, she would normally just keep the same number of significant digits (5.123 ± 0.01 °C), but I won’t lose sleep if more are used.
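To make that concrete, here is a minimal sketch in plain Python (the function name and the formatting convention are my own illustration, not anything from a standard): it rounds the corrected reading back to the readout’s readability and states the uncertainty alongside it.

```python
from decimal import Decimal

def report(value: float, uncertainty: float, readability: float, unit: str = "°C") -> str:
    """Format a reading to the same number of decimal places as the instrument's
    readability, with the stated uncertainty printed alongside it."""
    places = -Decimal(str(readability)).as_tuple().exponent  # e.g. 0.001 -> 3 decimal places
    return f"{value:.{places}f} ± {uncertainty} {unit}"

# The corrected reading from the example above: the extra digits introduced by the
# correction factor are simply rounded back to the readout's 0.001 °C readability.
print(report(5.1232, 0.01, 0.001))  # -> "5.123 ± 0.01 °C"
```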
Let me repeat: readability (a.k.a. resolution, significant digits) should not be used to establish or communicate measurement uncertainty. You should not waste your time worrying about significant digits. Everyone who sells data, including NIST, simply reports a “reasonable” number of significant digits when performing computations, and often reports more than needed just to be safe. (You can always round later.) No one pays much attention to it, unless of course the number of significant digits is unreasonably small. By contrast, a great deal of energy is spent determining measurement uncertainty…
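As for where that energy goes, here is a minimal sketch of one common approach (a GUM-style root-sum-square of standard uncertainty components; the component values below are assumptions I made up for the temperature readout example, not real budget entries). It also shows why the readout’s resolution alone tells you almost nothing about the uncertainty.

```python
import math

def combined_standard_uncertainty(components: list[float]) -> float:
    """Root-sum-square combination of standard uncertainty components (GUM-style)."""
    return math.sqrt(sum(u ** 2 for u in components))

# Illustrative components for the 0.001 °C-readability readout (values are assumptions):
u_resolution  = 0.001 / math.sqrt(12)   # rectangular distribution over one digit of readability
u_calibration = 0.004                   # hypothetical value from a calibration certificate
u_drift       = 0.002                   # hypothetical drift estimate between calibrations

u_c = combined_standard_uncertainty([u_resolution, u_calibration, u_drift])
U   = 2 * u_c                           # expanded uncertainty, coverage factor k = 2
print(f"U ≈ {U:.3f} °C")                # roughly ±0.01 °C; resolution is a negligible part of it
```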