Suppose you have a choice in making a product. Process A yields products that are slightly out of specification, but with an extremely tight distribution (very small standard deviation). So you have a very uniform product, but off by a bit.
Process B yields a mean which is within specification, but with very large variability. The variability is such that lots of product is out of specification by a large amount.
From a statistics POV, which is the better process?
I would say “A”, because you can probably move the mean to get it within the specification limits.
What is the correct answer?
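To put some toy numbers on it (everything here is invented for illustration), a quick simulation shows why the answer hinges on whether A's mean can be moved:

```python
# Toy simulation of the two processes; all numbers invented.
# Spec: target 100.0, limits [99.0, 101.0].
import numpy as np

rng = np.random.default_rng(0)
lsl, usl = 99.0, 101.0

# Process A: very tight spread, mean just outside the upper limit.
a = rng.normal(loc=101.2, scale=0.05, size=100_000)
# Process B: mean on target, but large spread.
b = rng.normal(loc=100.0, scale=1.5, size=100_000)

for name, x in [("A", a), ("B", b)]:
    in_spec = np.mean((x >= lsl) & (x <= usl))
    print(f"Process {name}: {in_spec:.1%} in spec")

# Typical result: A is ~0% in spec, B is ~50% - but if A's mean can be
# shifted onto the target, A jumps to essentially 100% in spec.
```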
How bad is it to be a tiny bit out of specification?
If you are making sock puppets, and all the product being out of spec just means the socks are all a hair snugger on the hand than preferred, then no big deal - they’re all still salable. Having the socks vary wildly in size would be much worse, resulting in volumes of unsalable product.
If you are making blue lasers, and being a hair out of spec means utter failure and worthlessness, then producing a pile of very similar lasers which are all consistently worthless is the worst-case scenario. If instead you are producing widely varying ones, of which 20% are salable and the rest are dramatically bad, that’s the better option, because you can test for the good ones and at least sell the 20%.
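That 20% figure is the kind of thing you can estimate straight from the normal CDF, assuming the output is roughly normal (the spec limits and spread below are hypothetical):

```python
# Salable fraction of a wide, roughly centered process, read off the
# normal CDF. Spec limits and spread are hypothetical.
from scipy.stats import norm

lsl, usl = 99.0, 101.0   # spec limits
mu, sigma = 100.0, 3.0   # centered mean, large spread

salable = norm.cdf(usl, mu, sigma) - norm.cdf(lsl, mu, sigma)
print(f"salable fraction: {salable:.0%}")  # ~26% with these numbers
```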
If you can move A to get the mean within the specification limits, that’s an entirely different situation.
This was my first thought as well, but I think this should already be taken into account by the specification. Things in-spec are usable and things out-of-spec are waste (or require re-work). In the case of sock puppets, you could give the manufacturer tighter specs so that your sock puppets fit more nicely, but this is going to cost someone more.
Assuming my definition above for in-spec, I see four questions:
1. What is the failure rate?
2. How much does the waste cost?
3. How tunable is the process? Can a process that precisely makes incorrect widgets be adjusted to within tolerance?
4. How soon does the process need to be ready? Can the process be tuned in time?
Precision would always trump accuracy provided the process could be tuned enough and on time (#3 and #4). If not, you have to balance the questions above – how much do we slip our time frame tuning the process vs. how many bad widgets are we going to create?
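As a back-of-the-envelope sketch of that trade-off (all costs, volumes, and distribution parameters below are made up):

```python
# Back-of-the-envelope waste-cost comparison; every number is invented.
from scipy.stats import norm

lsl, usl = 99.0, 101.0
unit_cost, volume = 2.50, 50_000   # cost per scrapped widget, run size

def scrap_rate(mu, sigma):
    """Fraction of output falling outside [lsl, usl]."""
    return 1.0 - (norm.cdf(usl, mu, sigma) - norm.cdf(lsl, mu, sigma))

# Precise process after tuning its mean onto target, vs. the wide one.
waste_tuned = scrap_rate(mu=100.0, sigma=0.05) * unit_cost * volume
waste_wide = scrap_rate(mu=100.0, sigma=1.5) * unit_cost * volume
print(f"waste cost, tuned precise process: ${waste_tuned:,.0f}")  # ~$0
print(f"waste cost, wide process:          ${waste_wide:,.0f}")   # ~$63,000

# Tuning pays off whenever its cost (including schedule slip) is less
# than the gap between the two waste figures.
```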
There’s no correct answer without knowing what the impact of being out of spec is. Sock puppets? No big deal - in fact, you can’t really say they’re out of spec. Machined parts for an assault rifle? You have no room for error; out of spec by a thousandth, and they’re a complete waste of money.
If the customer will accept no products out of specification, Process A will bankrupt your company. If it could be tuned to bring the products into spec then it should be tuned after the first off. Otherwise, what’s the point of making products the customer won’t accept?
In general, when trying to improve the quality of a process, you need to do the following steps:
1. Define the upper and lower spec limits for your various critical-to-quality specifications.
2. Measure what you’ve currently got. If the process is out of control (i.e. random variation outside the spec limits), then:
   2a. Get it under control so the process is repeatable.
3. Once the process is under control, improve it by eliminating causes of variance.
4. Develop a control plan to keep the process in control.
If your process is not under control (i.e. there’s high, random variance in quality), it’s extremely difficult to improve. The first step is to analyze the problem and get the process repeatably within spec limits, even if those limits are wider than you’d like.
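The usual SPC numbers for this (not named in the post, but standard) are the capability indices Cp and Cpk - Cp measures the spec width against the process spread, while Cpk additionally penalizes an off-center mean. A minimal sketch with invented numbers:

```python
# Capability indices for the two hypothetical processes from the question.
import numpy as np

def capability(samples, lsl, usl):
    """Cp: spec width vs. process spread; Cpk: also penalizes off-center."""
    mu, sigma = np.mean(samples), np.std(samples, ddof=1)
    cp = (usl - lsl) / (6 * sigma)
    cpk = min(usl - mu, mu - lsl) / (3 * sigma)
    return cp, cpk

rng = np.random.default_rng(1)
# Tight but off-center: huge Cp, negative Cpk (the mean is outside spec).
print(capability(rng.normal(101.2, 0.05, 10_000), 99.0, 101.0))
# Centered but wide: Cp and Cpk both well under 1.
print(capability(rng.normal(100.0, 1.5, 10_000), 99.0, 101.0))
```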
Of course, it’s more complicated than that, even. It might be that products which are off-spec are still sellable, but at a reduced price. This is true of computer processors, for instance: A chip company will have one production line of chips specced to be able to run at, say, 2.1 GHz. As the chips come off the line, they’ll be inspected. The chips that are on spec will then be put in boxes labeled 2.1 GHz and sold for a high price, while the ones that are a little bit off will still be usable, just at lower speeds, so they’ll be put in boxes labeled 2.0 or 1.9 GHz and sold for lower prices. Even if all of the chips were exactly consistent, it’d still be to the chip company’s benefit to segregate the product into different price points this way (possibly with some features deliberately crippled on the cheaper ones to maintain the incentive), so they really don’t lose anything by having a slightly inconsistent product.
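A toy version of that binning logic, with hypothetical frequencies and bin points:

```python
# Toy chip binning: grade each part by the highest speed bin it clears
# instead of scrapping everything below the top spec.
import numpy as np

rng = np.random.default_rng(2)
max_ghz = rng.normal(2.05, 0.08, size=10)   # measured max stable clock

bins = [2.1, 2.0, 1.9]   # sell-as speeds, highest first

def grade(freq):
    for b in bins:
        if freq >= b:
            return f"sold as {b} GHz"
    return "scrapped"

for f in max_ghz:
    print(f"{f:.2f} GHz max -> {grade(f)}")
```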
That’s sayin’ a mouthful, of course.
I visit places where they don’t know how to measure things in the first place.