In distributing a product, a poll is taken to see whether the product was delivered. The population is 550,000 and the poll sample was 300. There is no “mean,” as this is a one-time poll about a one-time distribution; however, our “score” was 94.7% delivered, and our bonus depends on achieving 95%.
My problem is that the verification company is saying that their sample size, relative to the population, corresponds to a 95% confidence level and a 2.6% margin of error… and I just don’t see it. But perhaps it’s my source… after all, I’m not a statistician.
This tells me that I would need to call 1,400+ people to get the 95% CL and 2.6% ME claimed by the verification company.
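For reference, here is a minimal sketch of that sample-size calculation, using the standard normal-approximation formula for a proportion with a finite-population correction (the function name and the z-value of 1.96 for 95% are my own assumptions, not anything the verification company showed me):

```python
import math

def sample_size(N, z, moe, p=0.5):
    """Required sample size for estimating a proportion, with
    finite-population correction. Uses the conservative p = 0.5."""
    n0 = z**2 * p * (1 - p) / moe**2          # infinite-population size
    return math.ceil(n0 / (1 + (n0 - 1) / N))  # correct for finite N

# 95% CL (z ≈ 1.96), 2.6% margin of error, population 550,000
n = sample_size(550_000, 1.96, 0.026)  # → 1418, i.e. the 1,400+ figure above
```

With a population this large the finite-population correction barely matters; the uncorrected figure is about 1,421.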
We also run our own audit, one far more detailed than the verification company’s: for the same job, we contacted 21,000 people, not 300. (Our survey measures distribution quality, not just a sample from a population, so it has to be more thorough.) Plugging those numbers into the above calculator shows me a CL of 99% and an ME of 0.87%.
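That 0.87% figure checks out with the same standard formula, run in reverse, assuming z ≈ 2.576 for a 99% confidence level and the conservative p = 0.5 (again, my assumptions about what the calculator is doing):

```python
import math

def margin_of_error(N, n, z, p=0.5):
    """Margin of error for a proportion, with finite-population correction."""
    se = math.sqrt(p * (1 - p) / n)        # standard error, worst case p = 0.5
    fpc = math.sqrt((N - n) / (N - 1))     # finite-population correction
    return z * se * fpc

# 99% CL (z ≈ 2.576), sample of 21,000 from a population of 550,000
moe = margin_of_error(550_000, 21_000, 2.576)  # → ≈ 0.0087, i.e. 0.87%
```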
Anyway, my question is: Is it possible, on a population of 550,000 and a sample size of 300, assuming a standard response distribution, to achieve a confidence level of 95% with a margin of error of just 2.6%? Is there some higher-order statistics being applied here… or are we being cheated?
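For whatever it’s worth, here’s what the standard formula gives for n = 300 under two different assumptions about the proportion p. One possible (hypothetical) explanation for the discrepancy: the conservative “standard response distribution” uses p = 0.5, but if the observed 94.7% delivery rate is plugged in instead, the margin of error shrinks to roughly the company’s figure:

```python
import math

def margin_of_error(N, n, z, p):
    """Margin of error for a proportion, with finite-population correction."""
    fpc = math.sqrt((N - n) / (N - 1))
    return z * math.sqrt(p * (1 - p) / n) * fpc

# Sample of 300 from 550,000 at 95% CL (z ≈ 1.96)
conservative = margin_of_error(550_000, 300, 1.96, 0.5)    # worst-case p = 0.5
observed     = margin_of_error(550_000, 300, 1.96, 0.947)  # observed 94.7% rate
# conservative ≈ 0.0566 (5.66%); observed ≈ 0.0253 (≈ 2.5%)
```

I can’t say whether that is actually what the verification company did, only that the arithmetic lines up.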