S.S. is applicable to a call center, to be sure, but typically not at the end-user experience level (John Q. Public). Your incoming call rate, handle rate, close rate (for sales, if you do that sort of thing), and all those other bits are VERY suited for statistical analysis, and I won’t argue that - hell, that’s my JOB.
The problem is again one of application rather than of principle. S.S. is designed to refine processes and reduce an error rate. For an IVR (the computer voice which walks you through options when you call in), S.S. or a process like it is indispensable. But for agent-customer interaction, eh, not so much.
The reason for this is that most agent-customer interactions are not situations that lend themselves well to measuring, except in the most basic sense. For example, a good call center has programmed its IVR to handle all the routine, everyday tasks that make up the bulk of its incoming calls. This makes good, logical sense - you don’t have to pay people to take the calls that the computer can handle, and the computer can handle many, many more calls at one time than your poor agents can. Plus, it will never piss off the customer with speech impediments, poor word choice, chewing gum on the call, burping, forgetting to put mute on, badmouthing the customer when the agent thinks mute IS on, and so on. The problems it does have are repeated problems that can be ironed out by applying S.S. (or, again, other) methods.
In contrast, your agents will do all of those no-nos and then some, and the customer’s issue is pretty much guaranteed to be a non-standard problem. The standard problems that slip through the IVR (in a well-run center) are insignificant when compared to the weird ones that the IVR can’t handle and forwards on to an agent. (Side note: ideally, you’d apply S.S. to your output, whatever it is, and reduce the number of weird problems that can arise in the first place – this is known as Root Cause Analysis – but even my rosy world ain’t that rose-colored.)
Because you can’t really quantify the myriad ways for an agent to screw up, nor the innumerable causes for the call in the first place, your best bet lies in applying S.S. principles internally, to standardize your practices and figure out the best way to handle the weirdness coming down the line. This usually boils down to training methods, QA and feedback controls, operational decisions such as whether it’s worth keeping agents on during slow times, and so on. All of that lends itself well to statistical analysis, if you impose stringent enough controls. The interactions between agent and customer do not, except in the most basic sense (such as how long a call lasts, which is known as Average Handle Time, and which is the STUPIDEST method I know to rate an agent’s performance unless their only job is to do things your IVR should be capable of doing).
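To show why, here’s a minimal sketch of the “most basic sense” of measurement - per-agent Average Handle Time with the 3-sigma control limits that S.S.-style control charts are built on. The agent names and call durations are made up for illustration; they aren’t real call-center data.

```python
# Hedged sketch: per-agent Average Handle Time (AHT) with 3-sigma control
# limits, the basic building block of an S.S.-style control chart.
# All durations below are hypothetical illustration data.
from statistics import mean, stdev

call_durations = {  # seconds per call, by agent (made-up numbers)
    "agent_a": [210, 195, 240, 220, 205],   # steady diet of routine calls
    "agent_b": [600, 180, 90, 1100, 250],   # the weird stuff the IVR punted
}

for agent, durations in call_durations.items():
    aht = mean(durations)
    sigma = stdev(durations)
    # Control limits: calls outside mean +/- 3 sigma would be flagged as
    # "out of control" by a standard control chart.
    ucl = aht + 3 * sigma
    lcl = max(0, aht - 3 * sigma)
    print(f"{agent}: AHT={aht:.0f}s, limits=({lcl:.0f}s, {ucl:.0f}s)")
```

Note what the numbers would tell you: agent_b’s huge spread isn’t necessarily bad performance - it may just mean a steady diet of non-standard problems forwarded by the IVR - which is exactly why AHT alone is such a dumb way to rate an agent.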
Unfortunately, the agent-customer interaction is the most visible, and thus gets most of the attention. It’s important, sure, but it will get better if you get better in other areas, and – this is the important bit – it typically does not affect your bottom line nearly as much as the vastly more numerous interactions your automated (and thus quantifiable) systems do.
Simply put, if S.S. were used where it would do the most good, I’d have no objections to it. Unfortunately, what I see is everyone trying to make the most visible improvement, rather than the most effective one. And the typical S.S. consultant either doesn’t know what they’re doing re: the call center industry, or is deliberately trying to fatten his/her own bottom line.