OK, I’ve been fiddling with regression, ANOVA, gauge R&R, and probability distributions, and the only conclusion I’ve come to is that I have no idea what I’m doing.
I have an automated Karl Fischer coulometer scheduled for validation, and I’m trying to determine some performance expectations. The device measures moisture content by electrically liberating iodine from a reagent bath; the iodine is consumed by any moisture in the sample. The endpoint is a return to the initial state, detected by a voltmeter; the “titration” measures current × time, i.e., the charge transferred. Because this is an exact physical relationship (Faraday’s law), there is no calibration.
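For anyone curious, the charge-to-water relationship the instrument exploits can be sketched directly from Faraday’s law. This is just an illustrative conversion, not the vendor’s firmware; the names are mine:

```python
# Sketch of the charge-to-water conversion a KF coulometer relies on.
# Two electrons generate one I2, which consumes one H2O (Faraday's law),
# so no calibration curve is needed. Constants and names are illustrative.
FARADAY = 96485.332   # C/mol
M_WATER = 18.015      # g/mol
N_ELECTRONS = 2       # electrons per mole of water titrated

def water_ug_from_charge(coulombs: float) -> float:
    """Micrograms of water corresponding to a given charge transfer."""
    return coulombs / (N_ELECTRONS * FARADAY) * M_WATER * 1e6

# ~10.71 C of charge corresponds to about 1 mg (1000 ug) of water.
```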
The sample is a lyophilized solid, extracted with methanol. The methanol is as nearly anhydrous as we can get. However, every sample handled is subject to absorbing moisture from the atmosphere, as are all standards. All samples are injected from a precision syringe, which is weighed before and after injection – the weight difference is used as the sample size in calculations.
Procedure: (1) titrate standard ampoules with certified water content; (2) titrate methanol blanks; (3) titrate control sample (product lot with test history and control chart); (4) titrate samples.
(1) and (3) are performance checks – if the results don’t fall within range, the test is invalid. The tricky part is (2) – this blank value is subtracted from all results from (3) and (4).
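Since the blank is subtracted from every result, its noise propagates into every corrected result. A minimal sketch, assuming independent, roughly normal errors (my assumption, not something established above):

```python
import math

def blank_corrected(gross_ug_per_g: float, blank_ug_per_g: float) -> float:
    """Net moisture = gross reading minus the methanol blank."""
    return gross_ug_per_g - blank_ug_per_g

def corrected_sd(sd_gross: float, sd_blank: float) -> float:
    """SD of the corrected result: under independence, variances add,
    so subtracting a noisy blank *increases* the result's uncertainty."""
    return math.sqrt(sd_gross**2 + sd_blank**2)
```

So even if the blank subtraction centers the results correctly, it widens their spread – e.g. `corrected_sd(3.5, 2.0)` is about 4.0 µg/g, not 3.5.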
Really tricky part: the certified standard contains 100 µg of water per gram of sample. For various reasons that don’t matter here, I can’t use less than 0.5 grams of the standard for a validation test – hence, the lower limit of my ability to confirm the instrument’s performance is 50 µg. The blank, the control, and the sample typically contain about 35 µg of water per gram of sample. This is OK for the sample, because the spec only requires that the moisture be less than the limit – which is 50 µg/g.
What I’m trying to get at is the probability of passing a bad lot (the consumer’s risk, or β risk), given the variance of the standard measurements I make.
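In case it helps frame the question: if a single measurement is treated as normally distributed around the lot’s true moisture, the pass probability for a given true value is just a normal CDF. A rough sketch under that assumption (the 50 µg/g limit is from above; the normality assumption is mine):

```python
import math

SPEC_LIMIT = 50.0  # ug/g, upper spec for moisture

def norm_cdf(z: float) -> float:
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def p_pass(true_moisture: float, meas_sd: float) -> float:
    """Probability a single measurement of a lot with the given true
    moisture falls below the spec limit, i.e., the lot passes."""
    return norm_cdf((SPEC_LIMIT - true_moisture) / meas_sd)
```

One immediate consequence: a lot sitting exactly at 50 µg/g passes 50% of the time, and a genuinely bad lot at, say, 55 µg/g still passes a non-trivial fraction of the time when the measurement SD is a few µg/g.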
Bonus trickiness: the control chart for the control samples looks horrible – for a lot with mean = 35 µg/g, the standard deviation is about 10% of that (≈3.5 µg/g). It’s the nature of operating at the limits, but YUK.
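Those numbers can be turned into the usual control-chart and capability figures to show just how cramped things are. A hedged sketch using the one-sided capability index against the upper spec (standard formulas, my function names):

```python
def cpk_upper(mean: float, sd: float, usl: float) -> float:
    """One-sided process capability index against an upper spec limit:
    (USL - mean) / (3 * sd)."""
    return (usl - mean) / (3.0 * sd)

def ucl_3sigma(mean: float, sd: float) -> float:
    """Upper 3-sigma control limit for individual results."""
    return mean + 3.0 * sd
```

With mean = 35 and sd ≈ 3.5, the 3-sigma upper control limit lands at 45.5 µg/g – uncomfortably close to the 50 µg/g spec – and the one-sided Cpk is only about 1.43, which is why the chart looks so ugly.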