Straight Dope Message Board > Main > error analysis

#1
03-01-2010, 10:13 PM
 rbroome Member Join Date: Jun 2003 Location: Louisiana Posts: 1,754
error analysis

I have a problem where I need a pretty rigorous answer.
I have a series of measurements (it doesn't matter what; they are optical measurements of seawater) where the values are the same order of magnitude as the resolution of the instrument.

Typical values are: 0.004, 0.002, 0.010, 0.005, etc.

The issue is that differences of 0.001 have little practical significance, yet 0.001 is the resolution of the instrument. So suppose I have two measurements, 0.002 and 0.003. These show a 50% difference, which gets interpreted as a 50% error. But that doesn't accurately describe the problem, since in reality the difference isn't that significant.

How do I describe an error that is the same order as the resolution?

Thanks
#2
03-01-2010, 10:26 PM
 GameHat Guest Join Date: Dec 2007
If you're measuring right at the resolution limit of your equipment, why aren't the largish error margins realistic and meaningful?

My knee-jerk response is to say live with the huge relative error bars or get a more precise instrument.

If your precision is 0.001 and you measure 0.002, then 0.002 +/- 0.001 is pretty realistic. That isn't necessarily bad, but I think it is realistic.

#3
03-02-2010, 06:02 AM
 footballisplayedwithyourfeet Guest Join Date: Oct 2008
You should work with standard deviations; given the other values, both 0.002 and 0.003 are at the low end of the measurement range. It is kind of hard to say anything without knowing what you want to find out / why you are looking at these measurements, but (given a normal distribution) you can use the mean and standard deviation to standardize the distances between points.

Also, what GameHat said. If your equipment can't be more precise, you have to use what it tells you, and there is no way of knowing whether 0.003 is really 50% more than 0.002 or not. It could also be more than 50%, by the way...
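The standardization described above can be sketched in a few lines of Python. The readings are the OP's example values; this only illustrates the mechanics of z-scores, not a claim that they are meaningful for data this close to the resolution limit:

```python
# Express each reading as a distance from the mean, in units of the
# sample standard deviation (a z-score).
import statistics

readings = [0.004, 0.002, 0.010, 0.005]  # the OP's example values

mean = statistics.mean(readings)
sd = statistics.stdev(readings)  # sample standard deviation (n-1 denominator)

z_scores = [(x - mean) / sd for x in readings]
for x, z in zip(readings, z_scores):
    print(f"{x:.3f} is {z:+.2f} standard deviations from the mean")
```

By construction the z-scores are centered on zero, so the 0.010 reading stands out as the largest positive deviation.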
#4
03-02-2010, 07:22 AM
 Crafter_Man Member Join Date: Apr 1999 Location: Ohio Posts: 8,575
A few related questions:

What is the expected measurement range of the seawater?

What is the measurement range of the instrument?

Are you taking measurements to simply see if the values are below a threshold/limit?

Is this instrument calibrated? Is it NIST-traceable? What is the uncertainty of the instrument?
#5
03-02-2010, 10:20 AM
 Bill Door Charter Member Join Date: Nov 2003 Posts: 3,078
Have you run any blank samples? That is, samples of the identical matrix without any of the substance of interest. As an estimate, I figure you can't reliably detect anything less than three times the standard deviation of your blank, and can't quantify anything less than five times that standard deviation.

So you'd run your blank ten times and get a standard deviation, N. If you run a sample and get 2.5*N, the only thing you can report is "not detected". If it's 4.0*N, you can report "< 5*N". Anything else and you're blowing smoke.
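A minimal sketch of that blank-sample procedure in Python. The blank readings here are made up for illustration; in practice you would substitute ten real blank runs from the actual instrument:

```python
# Detection limit (3*N) and quantification limit (5*N) from the
# standard deviation N of repeated blank measurements.
import statistics

blank_readings = [0.0011, 0.0009, 0.0013, 0.0008, 0.0010,
                  0.0012, 0.0009, 0.0011, 0.0010, 0.0007]  # hypothetical blanks

n = statistics.stdev(blank_readings)  # N: std dev of the blank
lod = 3 * n  # below this, report "not detected"
loq = 5 * n  # between lod and this, report "< 5*N"

def report(sample_value, blank_sd):
    """Classify a sample reading against the 3*N / 5*N thresholds."""
    if sample_value < 3 * blank_sd:
        return "not detected"
    if sample_value < 5 * blank_sd:
        return f"detected, < {5 * blank_sd:.4f}"
    return f"{sample_value:.4f}"

for v in [0.0004, 0.0007, 0.0020]:
    print(v, "->", report(v, n))
```

With these made-up blanks, N is roughly 0.00018, so a reading of 0.0004 is unreportable and 0.0007 can only be reported as an upper bound.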
#6
03-02-2010, 06:00 PM
 md2000 Guest Join Date: Feb 2009
The standard answer from Physics Lab 101: the error of an instrument is half the smallest scale reading, so if the instrument measures to the nearest 0.001, the absolute error is +/- 0.0005. The relative error is then +/- 0.0005 divided by the reading: anywhere from 50% for a reading of 0.001 down to 12.5% for a reading of 0.004.

You can do multiple readings and take the average variation, but the standard lab answer is that the quality of the answer still cannot be better than the margin of error of the instrument.

If you want more accurate answers, get a better instrument; don't take readings at the bottom of the scale. You don't use a wooden ruler to measure the width of tiny screws.

The way I explain it to students in the lab is: "If I say I'll give you one or two dollars, and I do this 10 times, how much money will you have?" The answer is about $10 to $20. You can't get more accurate than that without statistical evidence, and the stats might tell you more about the observer than the readings.
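The half-least-count rule above can be shown directly. This sketch just applies the rule to the OP's example values and prints the resulting relative errors:

```python
# Quantization uncertainty of a digital readout: half the least count.
# The relative error blows up as readings approach the least count.
resolution = 0.001
abs_err = resolution / 2  # +/- 0.0005

for reading in [0.001, 0.002, 0.004, 0.010]:
    rel_err = abs_err / reading
    print(f"{reading:.3f} +/- {abs_err:.4f}  ({rel_err:.1%} relative)")
```

The relative error runs from 50% for a reading of 0.001 down to 5% for a reading of 0.010, which is exactly the OP's complaint in numerical form.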
#7
03-02-2010, 06:07 PM
 Bayesian Empirimancer Guest Join Date: Mar 2010
Maybe I can suggest maximum likelihood in a memoryless system, from a Bayesian perspective?
#8
03-02-2010, 07:54 PM
 ultrafilter Guest Join Date: May 2001
Quote:
 Originally Posted by Bayesian Empirimancer Maybe I can suggest maximum likelihood in a memoryless system, from a Bayesian perspective?
What does that have to do with error analysis?
#9
03-02-2010, 09:16 PM
 rbroome Member Join Date: Jun 2003 Location: Louisiana Posts: 1,754
thanks all.
The data, FWIW, is a set of optical measurements of seawater estimated from satellite images of coastal waters. Values can range from very close to zero all the way up to 1. The values I am interested in are the ones as close to zero as I can find, hence the question about handling low values.

It is a fun project; the researcher leading it likes to point out that since we are looking at the portion of the light that entered the water and was reflected all the way back up to the satellite, 99% of the signal is noise to us.
#10
03-02-2010, 09:20 PM
 Bayesian Empirimancer Guest Join Date: Mar 2010
Quote:
 Originally Posted by ultrafilter What does that have to do with error analysis?
Oops, I admit I didn't read the OP as carefully as I should have.
#11
03-03-2010, 10:58 PM
 Crafter_Man Member Join Date: Apr 1999 Location: Ohio Posts: 8,575
The uncertainty of each reading will be greater than ±0.0005. If the range is 0-1, I am guessing the 95% confidence uncertainty (Student's t) is closer to ±0.05 over a limited range. The only way to know for certain is to perform an uncertainty analysis based on NIST-traceable calibration records.

At any rate, rbroome, you need to have some kind of idea of the uncertainty of the measurement data. If you don't know it, then I would assume the values at the low end are pretty worthless as absolute measurements of anything. Most instruments and systems are only traceable from 5% to 100% of full scale. Some are better, like 2% to 100% of full scale. But very few, if any, are able to produce meaningful data down to 0.1% of full scale, which is where you're measuring. The only thing you might be able to do with the data is make relative measurements, i.e. something that has a value of 0.006 might be twice as large as something with a value of 0.003. But you would have no idea what the absolute value was. Perhaps that's O.K. for you; I don't know.
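For what it's worth, even the relative comparison suggested above carries a large uncertainty near the bottom of the scale. A sketch using standard first-order error propagation for a ratio of two readings, assuming an illustrative ±0.0005 absolute uncertainty on each (not a number from the OP's instrument):

```python
# First-order propagation for a ratio a/b with independent errors:
# the relative uncertainty of the ratio is sqrt((u/a)**2 + (u/b)**2).
import math

def ratio_with_uncertainty(a, b, u):
    """Return (a/b, absolute uncertainty of a/b) for equal reading errors u."""
    ratio = a / b
    rel = math.sqrt((u / a) ** 2 + (u / b) ** 2)
    return ratio, ratio * rel

r, dr = ratio_with_uncertainty(0.006, 0.003, 0.0005)
print(f"ratio = {r:.2f} +/- {dr:.2f}")
```

So "0.006 is twice 0.003" really means a ratio of about 2.0 ± 0.4 under this assumption; the ratio is better determined than either absolute value, but far from exact.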

#12
03-05-2010, 10:50 PM
 rbroome Member Join Date: Jun 2003 Location: Louisiana Posts: 1,754
Thanks. The absolute values are important to me. Fortunately we can handle larger errors at the lower end of the range, but I need to understand how to analyze the errors. Your comments are helpful.
