Accuracy, precision, something else?

Consumer-targeted digital measurement devices can also have sneaky, dishonest program features that confuse the issue of precision. I had a bathroom scale like this. It read out in tenths of a pound. Step on, it gives you a weight; step off, it re-zeros itself. But repeat weighings I did on myself always repeated in every digit, which seemed unlikely. So I tried alternately weighing myself holding a book and not holding it. Now the repeat measures (for me without the book, or for me with the book, but never mixing the two) were typically scattered over 2 or 3 counts in the last digit.
I am sure they programmed the thing to repeat the last reading if the new one seems quite close (and maybe only if the last one was taken a few seconds ago). They probably figured there’d be unfair complaints if the scale didn’t repeat itself exactly.
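Something like this would do it (a minimal sketch of what I imagine the firmware does; the 1 lb threshold and the 60-second window are pure guesses on my part):

```python
import time

class StickyScale:
    """Hypothetical 'sticky' readout: repeat the last displayed weight if the
    new raw reading is close enough and recent enough."""

    def __init__(self, stick_threshold_lb=1.0, stick_window_s=60.0):
        self.stick_threshold_lb = stick_threshold_lb
        self.stick_window_s = stick_window_s
        self.last_shown = None
        self.last_time = None

    def display(self, raw_weight_lb):
        now = time.monotonic()
        if (self.last_shown is not None
                and now - self.last_time < self.stick_window_s
                and abs(raw_weight_lb - self.last_shown) < self.stick_threshold_lb):
            # Close to the previous reading and taken recently: just repeat the
            # old number so the user sees "perfect" repeatability in every digit.
            shown = self.last_shown
        else:
            shown = round(raw_weight_lb, 1)
        self.last_shown = shown
        self.last_time = now
        return shown
```

Weighing with the book in between defeats the filter because the intermediate reading is far enough from the stored number to force a fresh measurement.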

Whereas my digital scale will be off by a pound or two from reading to reading, and I can’t imagine there’s user error there.

One of my products is a digital pressure gauge. Its accuracy is specified at 0.75% F.S.
At 60 PSI, 0.75% is 0.45 PSI. So, in theory, the gauge could read 0.45 PSI at zero and still be in spec. But that would make people crazy, so I just take any reading below 0.3 PSI and display it as zero. Nobody is interested in pressures that low anyway, so I’d rather not get gauges back for re-cal because they display 0.1 PSI at zero.

I could put in a “zero” function, but I don’t want that to mask a real drift issue.
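Schematically, the display logic amounts to this (a simplified sketch; the 0.1 PSI display step is assumed for illustration, and the real firmware obviously does more):

```python
ZERO_BAND_PSI = 0.3  # readings below this are shown as zero

def displayed_pressure(raw_psi):
    """Clamp small readings to zero so an in-spec gauge (0.75% F.S. accuracy)
    never shows something like 0.1 PSI with nothing connected."""
    if raw_psi < ZERO_BAND_PSI:
        return 0.0
    return round(raw_psi, 1)  # display step assumed to be 0.1 PSI
```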

I have one infrared thermometer whose output is displayed to a tenth of a degree, so if you point it at something it will display 33.4°C or whatever. However, if you look at the user’s manual, the accuracy ranges from ±1.5°C (best case) to ±3°C or even worse, and repeatability of the same measurement is only ±1°C. Perhaps the OP’s thermometer is from the same company…

@DPRK just above …

Now that’s definitely false precision: displaying apparent sigfigs that they know are not significant, but merely noise. Which of course encourages the growth of fake sigfigs as more measurements are taken, added together, and divided to produce a floating-point average that Excel or a calculator displays to 12 digits past the decimal point.
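A quick illustration of how those fake digits show up (readings made up):

```python
# Three hypothetical readings, each quantized to 0.1 lb by the scale.
readings = [152.4, 152.5, 152.5]
avg = sum(readings) / len(readings)
print(avg)  # prints something like 152.46666666666667 --
            # far more digits than the 0.1 lb the scale can actually resolve
```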


My scale has the same behavior. Although weight is displayed to an apparent precision of 0.1 lbs, the actual minimum increment of change it will register is 1.0 kg, or 2.2 lbs, which I learned by experimentation.

At one time I was stable in weight but weighed myself every morning in the same repeatable test condition for a couple of years. And recorded the weight. So I had lots of test data. This was motivated by a large earlier weight loss and the desire to keep it off through dietary and exercise discipline during that time.

The data was unequivocal: for several days straight my reported weight would not vary by even 0.1 lbs (roughly an ounce and a half), which is not credible for a ~150 lb human. Then my weight would jump or drop by the equivalent of a kilo or a bit more, stay there for several days, then change to a different number near the old one.

Over the course of weeks my indicated weight would slowly “breathe” up and down over a range of ~3-4 lbs, but it was “sticky” until it jumped by a kilo or slightly more.

I found that if I weighed myself, then did it again holding a 5 lb weight, then again without the weight, all within the same turn-on of the scale, the first and third readings were different. And the plot of third readings was realistically believable every day, showing the slow breathing with a bit of daily noise superimposed, and with none of the stickiness of the earlier data.

I suspect their goal was to avoid annoying dieting customers who would fret about minor fluctuations. By making the readings “sticky”, they gave the customer only occasional feedback, good or bad.

Lying Bastards.

Sorry, my bad. Your post was clear, don’t know how I read it any other way.

Yep! This exactly!

Really? You think they were trying to make the customer feel good, as opposed to making their product look good (in a particular way that casual users would rate highly but you and I would object to)?

If the accuracy is quoted as %FS (% of full scale), then the accuracy will be 0.75% of the maximum of the range given. So if the pressure gauge is 0-1,000 psi, the accuracy will be ±7.5 psi whether the reading is 60 psi, 200 psi, or 1,000 psi.

If the accuracy is given as “% of reading”, then the accuracy of the measurement would be as you calculated (0.75% of 60 psi = 0.45 psi).

You may also see some accuracy statements given as ± % of reading plus ± % of full scale. The particular gauge physics dictates which form is appropriate.
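To put numbers on the difference, here is a small sketch using the 0.75% spec from upthread and a hypothetical 0-1,000 psi gauge read at 60 psi:

```python
def error_band_fs(full_scale_psi, spec_pct):
    """Accuracy quoted as % of full scale: the band is the same at every reading."""
    return full_scale_psi * spec_pct / 100.0

def error_band_reading(reading_psi, spec_pct):
    """Accuracy quoted as % of reading: the band scales with the reading."""
    return reading_psi * spec_pct / 100.0

print(error_band_fs(1000.0, 0.75))      # 7.5 -> +/- 7.5 psi at 60, 200, or 1,000 psi
print(error_band_reading(60.0, 0.75))   # 0.45 -> +/- 0.45 psi at a 60 psi reading
```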

We could add resolution into the mix as another metric of sensor performance. That is usually taken as the smallest change the sensor can detect. With many digital sensors it is given as the number of bits used to report the measurement, so a 16-bit, 30,000 psi sensor would have a nominal resolution of about 0.45 psi.
If the accuracy and repeatability (precision) are both 0.05% FS, i.e. 15 psi, that 0.45 psi resolution is not real.
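The arithmetic for that example, as a quick sketch:

```python
full_scale_psi = 30000.0
bits = 16

resolution_psi = full_scale_psi / 2**bits        # ~0.458 psi per count
accuracy_band_psi = full_scale_psi * 0.05 / 100  # 0.05% FS = 15 psi

print(resolution_psi, accuracy_band_psi)
# The sensor reports in steps of ~0.458 psi, but each step sits inside a
# +/- 15 psi accuracy/repeatability band, so the fine resolution isn't
# meaningful on its own.
```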

Accuracy, precision, etc. It’s a confusing topic, with dependent definitions.

I used to work in a temperature metrology lab for the Department of Energy. We calibrated thermocouples, liquid-in-glass (LiG) thermometers, platinum resistance thermometers (PRTs), thermistors, bi-metallics, etc. It was there that I learned the seasoned metrologists frowned upon the terms “accuracy” and “precision.” They preferred “tolerance,” “uncertainty,” “readability,” and “repeatability.”

Tolerance is what’s stated in the manufacturer’s spec sheet. It’s sort of like the inverse of accuracy. As an example, a PRT might have a tolerance of ±0.05 °C. This means the PRT reading is guaranteed to always be within ±0.05 °C of the “true” temperature (assuming it’s undamaged and properly calibrated).

Uncertainty is sort of like tolerance, but it won’t be stated on the spec sheet. You need to determine uncertainty, and it takes quite a bit of time & effort to measure it. A metrologist might say, for example, “This LiG thermometer has a tolerance of ±0.10 °C. But I did a very careful comparison calibration with an SPRT over the past couple of days, and I am confident to 2s that the uncertainty is ±0.04 °C.” For most things we didn’t bother with measuring uncertainty; we simply compared the device under test (DUT) to a standard that was at least 10X better than the DUT, calculated errors over a temperature range, and then compared each error to the tolerance.
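Roughly speaking, the “2s” there is two standard deviations of the comparison data. A toy version of that calculation (the differences below are invented, and a real uncertainty budget would also fold in the SPRT’s own uncertainty and other terms):

```python
import statistics

# Hypothetical differences (LiG thermometer minus SPRT) from repeated
# comparison readings at one bath temperature, in degrees C.
errors_c = [0.021, 0.018, 0.025, 0.015, 0.022, 0.019, 0.024, 0.017]

mean_error = statistics.mean(errors_c)      # systematic offset of the DUT
two_sigma = 2 * statistics.stdev(errors_c)  # the "2s" spread about that offset

print(f"offset {mean_error:+.3f} C, 2-sigma {two_sigma:.3f} C")
```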

Readability, a.k.a. resolution, is simply the smallest increment or lowest “count” the DUT can report. As an example, a thermocouple readout might have a readability of 1 °C, which means, best case, a reading of 80 °C means the actual temperature is between 79.5 °C and 80.5 °C. (You will still need to account for tolerance on top of this.) It should also be noted that, just because something has a very “fine” readability (e.g. 0.001 °C), it doesn’t necessarily mean it has a low tolerance, low uncertainty, or good repeatability.
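In other words, the half-interval implied by readability alone is (tiny sketch, using the 80 °C example):

```python
def implied_interval(reading_c, readability_c):
    # A reading quantized to 'readability_c' could have come from anywhere in
    # this interval, before tolerance or uncertainty are even considered.
    return (reading_c - readability_c / 2, reading_c + readability_c / 2)

print(implied_interval(80, 1))  # (79.5, 80.5)
```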

Repeatability. Let’s say you have a triple point of water cell, which is exactly 0.01 °C. (FYI, it’s no longer exactly 0.01 °C, but very, very close.) If you stick a thermometer into the cell ten times, the thermometer should read the same each time. Any deviations can be statistically analyzed to come up with a repeatability spec. Note that good repeatability doesn’t imply low uncertainty. A thermocouple readout with a readability of 1 °C will probably show “perfect” repeatability in a water triple point cell (i.e. always reading 0 °C), but such an instrument has a large tolerance.
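For instance, ten repeated insertions of a finer-resolution thermometer might be analyzed like this (readings invented; the cell is taken as 0.01 °C):

```python
import statistics

# Ten hypothetical PRT readings in a water triple point cell, degrees C.
tpw_readings_c = [0.0112, 0.0108, 0.0110, 0.0111, 0.0109,
                  0.0110, 0.0113, 0.0107, 0.0110, 0.0111]

repeatability_c = statistics.stdev(tpw_readings_c)   # scatter across insertions
offset_c = statistics.mean(tpw_readings_c) - 0.01    # error relative to the cell

print(f"repeatability (1s): {repeatability_c:.4f} C, offset: {offset_c:+.4f} C")
```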

And then there’s systematic errors vs. random errors, drift, etc. I won’t get into these.

For what it’s worth, the Bureau International des Poids et Mesures, who are the folks who brought us SI units and UTC, have the International Vocabulary of Metrology (VIM), which no doubt updates all the definitions we have been using over the years.

Accuracy is not a quantitative measure any more!

https://jcgm.bipm.org/vim/en/2.13.html

I like those definitions, as they are cleaner in that they don’t rely on synonyms to mean different things.

However, I learned it as three terms: accuracy, precision, and resolution.

Accuracy is how close to the actual value the reading is.

Precision is how tightly the readings cluster.

Resolution is how finely you can meaningfully differentiate, i.e. how small a change can you detect.

Example: target shooting with a rifle. Fire off 10 rounds.

High accuracy means the shots will cluster around the bullseye, but the cluster may be so wide as to resemble a shotgun blast.

High precision means all the bullets go through the same hole, but that hole may be 2 feet to the left.

Resolution means the ring spacing of the target. A 1 inch bullseye with five 1 inch rings will have less resolution than a half inch bullseye with ten half-inch rings.

For the thermometer, accuracy can only be known by calibration (i.e. checking against another instrument), precision is about duplicating the same result, and resolution is determined by the variability of body temp as well as the means of measurement.
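Putting all three terms on one made-up set of thermometer readings:

```python
import statistics

true_temp_c = 37.00                            # the "actual" value in this toy example
readings_c = [37.3, 37.4, 37.2, 37.4, 37.3]    # hypothetical repeated readings
resolution_c = 0.1                             # smallest step the display shows

accuracy_error_c = statistics.mean(readings_c) - true_temp_c  # how far the cluster sits from the bullseye
precision_c = statistics.stdev(readings_c)                    # how tightly the readings cluster

print(f"accuracy error: {accuracy_error_c:+.2f} C")  # systematic offset (~+0.32 C here)
print(f"precision (1s): {precision_c:.2f} C")        # spread (~0.08 C here)
print(f"resolution: {resolution_c} C")               # finest step you could even see
```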