It is funny when an accountant or manager misuses significant figures. It is pathetic on the part of anyone with technical training.
Returning to our OP, it sounds like at one point he thought that significant figures were only those to the right of the decimal point.
I’d like to make the basic, but very fundamental, point that all digits are counted. The location of the decimal point is immaterial.
I did not intend to give that impression. For example, if something costs $12,899 we often say it costs $13,000.
The one I see the most is phantom precision from unit conversion. Best example ever was “metric” instructions on a dried salsa mix that said to use “two 295.735 ml cans of diced tomatoes”.
Hey buddy. Watch it! ;). It’s true though. The need to get short term results seems to outweigh the need for actual results.
On the flip side, I’ve seen engineers completely ignore our instructions and then come to the conclusion that our procedures don’t work before even finishing the experiment.
The worst part is that you may well find cans labeled that way.
I’ve heard of worse. Like, a report describing a dent as being “25.40 mm by 50.80 mm”. I think I heard that one in a thread here.
Of course, the most commonly encountered example of this is the conversion of 37 Celsius (approximate human body temperature) into Fahrenheit. Most folks (in the US, at least) don’t even know that that measurement originated in Celsius.
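Just to put numbers on the 37 °C example: a quick Python sketch (the ±0.5 °C band is my own assumption about how precise “37” really is, not anything official):

```python
def c_to_f(c):
    """Convert degrees Celsius to degrees Fahrenheit."""
    return c * 9 / 5 + 32

# "37 C" really means "somewhere around 37", call it +/- 0.5 C
lo, hi = c_to_f(36.5), c_to_f(37.5)

print(c_to_f(37))  # ~98.6 -- three sig figs conjured from a two-sig-fig input
print(lo, hi)      # ~97.7 to ~99.5 -- the honest range behind "98.6"
```

So the famous 98.6 °F carries a tenth-of-a-degree precision that the original measurement never had.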
My favorite was the road project they did in front of my house a couple of years ago. The DOT is specifying everything in metric, now (as of about a decade ago), but of course everyone (including the engineers) still does everything in feet, tenths, and hundredths, just like they’ve been doing all their careers, then they convert. They measured out the location of the roadsign that went in front of my house, and spraypainted the location on the pavement. It was written to the millimeter. The spot that showed where it went was at least an inch in diameter, and probably wasn’t even as good as ± 1" from the proper location, anyway. :dubious:
As a 1st-year physics lab assistant many years ago, part of the first lab was devoted to a lecture on reading instruments and calculating significant digits. The main points were: no more significant digits in the answer than in the worst reading; and errors had only 1 (one, uno, une, eins!!!) significant digit. The point was NOT to be anal about the numbers, but to say “we are truly confident the answer lies within this range”. If the range is a trifle larger than you like, well, that’s math…
That might just be really good quality control.
Hmm. This is not what happens in the real world.
As mentioned above, your primary (and first) question when making any kind of measurement involving a continuous parameter should always be, “What is the uncertainty of the measurement?” To determine this, you will need to look up the uncertainty for the instrument(s) you’re using, and then combine them using statistically-derived rules. Google “propagation of uncertainties” for more info. The uncertainty for each instrument can be found in its latest calibration record. If you don’t have a calibration record or certificate, you shouldn’t be using the instrument in the first place. Do not perform any rounding or truncating of digits while making measurements; record all digits and enter them into Excel or whatever.
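For independent uncertainty sources, the usual combination rule is root-sum-square (RSS). A minimal sketch; the component values here are hypothetical, just to show the mechanics:

```python
import math

def combine_uncertainties(components):
    """Root-sum-square (RSS) combination of independent standard uncertainties."""
    return math.sqrt(sum(u ** 2 for u in components))

# Hypothetical contributions: calibration uncertainty, instrument
# resolution, and thermal EMF, all in volts
u = combine_uncertainties([0.0030, 0.0012, 0.0005])
print(u)  # ~0.0033 V
```

Note how the largest component dominates: the 0.0005 V term barely moves the result, which is why chasing the small contributors is usually wasted effort.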
Once you have calculated your final answer of whatever it is you’re measuring, and performed your uncertainty analysis (again, using full precision), you next want to look at the uncertainty value you’ve calculated.
Generally, I use 1, 2, or 3 significant digits for the uncertainty value. There’s seldom a reason to use more than 3. After all, the uncertainty value itself is, well, uncertain. So if your uncertainty comes out to be 0.00326724152 V, I would probably just say it is 0.0033 V.
Next you want to look at the final calculated value. If, for example, your final calculated value is 12.5892347621563 VDC, I would probably say it is 12.5892 VDC, or perhaps even 12.589 VDC. This is based on the uncertainty value of 0.0033 V.
What instrument can truly measure to 6 or 10 digits of accuracy? For example, the earth is 40,000 km or 40,000,000 m in circumference, give or take. 8 digits of accuracy is “around the world, give or take a meter”. What sort of instrument then says, “plus or minus 189 meters”? That sounds more like it was plus or minus 200 yards converted to metric.
Another rule, for example, was that you could eyeball the analog readings for the last digit - “that looks like a 3” or “5” or “7” or “8”. But the error was still half the smallest scale division on the meter/ruler/calipers/whatever.
Good god, you can get me going on this all day. I import electronic products into Japan, and fight with that every day.
The worst is when something with a very rough range, say 1,000 ft, gets converted to 304.8 m. NO! NO! NO! That 1,000 ft could be anywhere between 600 and 1,800 ft depending on the cable and signal type.
Then there are all the signs which say “Slow 8 km/hr”. WTF is up with that? Obviously someone realized it was a bright idea that slow signs in America said 5 mph, so they converted to km, but forgot to think. Five mph has no significance; it’s arbitrary, so just make it 10 km/hr. It’s not like you are suddenly going to run over kids because you’re going 2 km/hr faster. (That’s 1.242 mph for those who aren’t familiar with metric.)
And continuing the rant, I was forever fighting with US marketing at the company because they would take US pricing, for example $5,995, convert it into yen at that day’s rate, and come up with numbers like 498,784 yen. How am I supposed to sell that?
Now we take you back to your regularly scheduled programming.
The NIST page on standard uncertainty is consistent with what Crafter_Man wrote.
The electron and proton mass pages both show two digits in the standard uncertainty of those values, with about nine significant figures given (including the two uncertain ones).
Quoth Crafter_Man:
No, your primary question should always be “What is the distribution of possible values of this measurement?”. There are many simple distributions which can be characterized by a single “width” parameter, but those distributions can be very different from each other, and that’s before even getting into more complicated distributions which can have a skew or bimodality or all sorts of other wrinkles.
For instance, if I had a measurement with an “error” (characteristic width of the distribution) of some value, but then it turned out that the actual value was fifty times the error off to one side, what would you say? If you were assuming that the “error” I quoted was the standard deviation of a Gaussian distribution, you’d tell me that I was an incompetent scientist, and clearly didn’t know how to use the equipment, or made a colossal blunder in the calculations. But if you knew that the measurement actually had a Lorentzian distribution, then the proper response would be to just shrug your shoulders and say “Eh, that happens occasionally”.
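To put a number on that “fifty widths off to one side” scenario, here’s a quick simulation (sample size and seed are my own choices) comparing a standard Cauchy distribution, which is the same thing as a Lorentzian, against a standard Gaussian:

```python
import math
import random

random.seed(42)
N = 200_000

# Standard Cauchy (Lorentzian) samples via the inverse CDF: tan(pi*(U - 1/2))
cauchy_far = sum(
    abs(math.tan(math.pi * (random.random() - 0.5))) > 50
    for _ in range(N)
)

# Standard Gaussian samples for comparison
gauss_far = sum(abs(random.gauss(0, 1)) > 50 for _ in range(N))

print(cauchy_far / N)  # ~0.013 -- over 1% of samples land 50+ widths out
print(gauss_far)       # 0 -- a 50-sigma Gaussian outlier is effectively impossible
```

Analytically, P(|X| > 50) for a standard Cauchy is (2/π)·arctan(1/50) ≈ 1.3%, so “Eh, that happens occasionally” is exactly right.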
Can you provide an example of a measurement with Lorentzian errors? I can’t imagine any.
The simplest example is an energy measurement in a quantum mechanical system, with no other noise sources besides the inherent quantum uncertainty. Pure spectral lines are Lorentzian (though in practice are often convolved with things like Doppler broadening and magnetic splitting).
In other contexts, there are many other measurements with similarly fat-tailed distributions, but I don’t know if any of those are exactly mathematically Lorentzian. For another example of a fat-tailed distribution, though (which also happens to be hugely asymmetrical), consider anything involving the tangent of an angle very near 90 degrees (as is done, for instance, in trigonometric parallax distance measurements in astronomy): A Gaussian (or other well-behaved distribution) error in the angle measurement can lead to an error distribution on the distance calculated that’s arbitrarily fat-tailed, even infinite in width, on one side.
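The tangent-near-90° asymmetry is easy to see with toy numbers (the 89.5° ± 0.4° figures below are mine, picked only for illustration):

```python
import math

center, err = 89.5, 0.4  # hypothetical angle measurement in degrees, near 90

t_lo  = math.tan(math.radians(center - err))  # tan(89.1 deg) ~ 64
t_mid = math.tan(math.radians(center))        # tan(89.5 deg) ~ 115
t_hi  = math.tan(math.radians(center + err))  # tan(89.9 deg) ~ 573

print(t_mid - t_lo)  # ~51  -- downside swing of a symmetric 0.4 deg error
print(t_hi - t_mid)  # ~458 -- upside swing: nearly ten times larger
```

A perfectly symmetric, well-behaved error in the angle turns into a wildly lopsided error in the tangent, and it only gets worse as the angle approaches 90°.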
In the world of physical metrology, we simply talk about “uncertainty.” Hundreds of metrologists at NIST and PSL’s have spent their entire professional careers trying to come up with better ways of calculating & quantifying measurement uncertainties. Not surprisingly, much of it has to do with distribution functions.
The value in my example (12.5892347621563 VDC) was not a reading directly reported by a DVM; it was assumed to be a calculated value based on some formulas in Excel.
Having said that, we have some pretty impressive instruments here in the lab when it comes to resolution. Our Keithley 2002 DVM has a readability of 1 nV on the 200 mV scale. And our ASL F18 thermometry bridge has a resolution of 0.003 ppm. (ASL’s F900 bridge has a resolution of 1 ppb.)
Indeed, that is better.
/shoots self in head.