I have a problem where I need a pretty rigorous answer.
I have a series of measurements (it doesn't matter what; they are optical measurements of seawater) where the values are the same order of magnitude as the instrument's resolution.
The issue is that a difference of 0.001 has little practical significance, yet that is the resolution of the instrument. So suppose I have two measurements, 0.002 and 0.003.
These measurements show a 50% relative difference, which gets interpreted as a 50% error. But that doesn't accurately describe the problem, since in reality the difference isn't that significant.
How do I describe an error that is the same order as the resolution?
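To put concrete numbers on it, here is a rough Python sketch using just the two readings above and the 0.001 resolution:

```python
# The two readings and the instrument resolution from the example above.
a, b = 0.002, 0.003
resolution = 0.001

abs_diff = abs(b - a)            # 0.001 -- exactly one resolution step
rel_diff = abs_diff / a          # 0.5  -- the alarming-looking "50%"

print(f"absolute difference: {abs_diff:.3f}")
print(f"relative difference: {rel_diff:.0%}")
print(f"difference in resolution steps: {abs_diff / resolution:.1f}")
```

The relative number only looks huge because the denominator is tiny; in absolute terms the two readings differ by a single count of the instrument.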
If you're measuring right at the resolution limit of your equipment, aren't the large error margins realistic and meaningful?
My knee-jerk response is to say live with the huge relative error bars or get a more precise instrument.
If your precision is 0.001 and you measure 0.002, then 0.002 +/- 0.001 is a pretty realistic statement. That isn't necessarily bad; it's just honest about what the instrument can tell you.
You should work with standard deviations; given the other values, both 0.002 and 0.003 are at the low end of the range of measurements. It is hard to say much without knowing what you want to find out and why you are looking at these measurements, but (assuming a normal distribution) you can use the mean and standard deviation to standardize the distances between points.
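For example, a rough sketch with made-up readings, assuming they just sit in a plain list:

```python
from statistics import mean, stdev

# Hypothetical low-end readings, made up for illustration.
readings = [0.002, 0.003, 0.002, 0.004, 0.001, 0.003]

m = mean(readings)
s = stdev(readings)   # sample standard deviation

# Standardized distance of each reading from the mean, in units of sigma.
z_scores = [(r - m) / s for r in readings]

print(f"mean = {m:.4f}, stdev = {s:.4f}")
print([f"{z:+.2f}" for z in z_scores])
```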
Also, what GameHat said. If your equipment can't be more precise, you have to use what it tells you, and there is no way of knowing whether 0.003 is really 50% more than 0.002 or not. It could also be more than 50%, by the way…
Have you run any blank samples? That is, samples of the identical matrix without any of the substance of interest. As a rule of thumb, I figure you can't reliably detect anything less than three times the standard deviation of your blank, and can't quantify anything less than five times that standard deviation.
So you'd run your blank ten times and get a standard deviation, call it N. If you run a sample and get 2.5N, the only thing you can report is "not detected". If it's 4.0N, you can report "detected, but < 5N". Anything else and you're blowing smoke.
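A rough sketch of that procedure (the blank values here are made up for illustration):

```python
from statistics import stdev

# Hypothetical blank runs (same matrix, none of the analyte) -- made-up numbers.
blanks = [0.0004, 0.0006, 0.0005, 0.0007, 0.0003,
          0.0005, 0.0006, 0.0004, 0.0005, 0.0006]

N = stdev(blanks)     # the "N" above
lod = 3 * N           # below this: report "not detected"
loq = 5 * N           # below this: detected, but can't be quantified

def report(value):
    if value < lod:
        return "not detected"
    if value < loq:
        return f"detected, < {loq:.4f}"
    return f"{value:.4f}"

print(f"N = {N:.5f}, 3N = {lod:.5f}, 5N = {loq:.5f}")
for v in (0.0002, 0.0005, 0.0020):
    print(f"{v} -> {report(v)}")
```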
The standard answer from physics lab 101: the error of an instrument is half the smallest scale division, so if the instrument reads to the nearest 0.001, the absolute error is +/- 0.0005. The relative error is that divided by the reading, so for readings between 0.001 and 0.004 it runs from about +/- 50% down to +/- 12%.
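In code, just restating that arithmetic:

```python
resolution = 0.001
abs_err = resolution / 2          # half the smallest scale division

for reading in (0.001, 0.002, 0.003, 0.004):
    rel_err = abs_err / reading   # relative error = absolute error / reading
    print(f"{reading} +/- {abs_err}  ({rel_err:.0%} relative)")
```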
You can take multiple readings and look at the average and the variation, but the standard lab answer is that the quality of the result still cannot be better than the margin of error of the instrument.
You want more accurate answers, get a better instrument, don’t be reading at the bottom of the scale. You don’t use a wooden ruler to measure the width of tiny screws.
The way I explain it to lab students is: "If I say I'll give you one or two dollars, and I do this 10 times, how much money will you have?" The answer is about $10 to $20. You can't get more accurate than that without statistical evidence, and the stats might tell you more about the observer than about the readings.
Thanks, all.
I appreciate the comments.
The data, FWIW, is a set of optical measurements of seawater estimated from satellite images of coastal waters. Values can range from very close to zero all the way up to 1. The values I am interested in are the ones as close to zero as I can find, hence the question about handling low values.
It is a fun project, the researcher leading the project likes to point out that since we are looking at the portion of the light that entered the water and was reflected all the way back up to the satellite, 99% of the signal is noise to us.
The uncertainty of each reading will be greater than ±0.0005. If the range is 0-1, I am guessing the expanded uncertainty (95% confidence, Student's t) is closer to ±0.05 over a limited range. The only way to know for certain is to perform an uncertainty analysis based on NIST-traceable calibration records.
At any rate, rbroone, you need to have some kind of idea of the uncertainty of the measurement data. If you don't know, then I would assume the values at the low end are pretty worthless as *absolute* measurements of anything. Most instruments and systems are only traceable from 5% to 100% of full scale. Some are better, like 2% to 100% of full scale. But very few if any are able to produce meaningful data down to 0.1% of full scale, which is where you're measuring. The only thing you *might* be able to do with the data is make *relative* measurements, i.e. something that has a value of 0.006 might be twice as large as 0.003. But you would have no idea of what the absolute value would be. Perhaps that's O.K. for you; I don't know.
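To put rough numbers on that (the ±0.05 is just my guess from above, not a real calibration figure):

```python
# Two low-end values and a guessed absolute uncertainty per reading.
a, b = 0.003, 0.006
u = 0.05                            # assumed uncertainty -- a guess, not a calibration result

diff = abs(b - a)
u_diff = (u**2 + u**2) ** 0.5       # uncertainty of the difference, if the errors are independent

print(f"ratio b/a = {b / a:.1f}")
print(f"difference = {diff:.3f} +/- {u_diff:.3f}")
print(f"either value distinguishable from zero? {a > u or b > u}")
```

With independent errors that large, neither the absolute values nor their difference means much; the relative comparison only survives if the dominant errors are common to both readings and largely cancel.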
Thanks
The absolute values are important to me. Fortunately, we can handle larger errors at the lower end of the range, but I need to understand how to analyze the errors. Your comments are helpful.