Six Sigma's rubbish. Isn't it?

S.S. is applicable to a call center, to be sure, but typically not at the end-user experience level (John Q. Public). Your incoming call rate, handle rate, close rate (for sales, if you do that sort of thing), and all those other bits are VERY suited for statistical analysis, and I won’t argue that - hell, that’s my JOB. :smiley:

The problem is again one of application rather than of principle. S.S. is designed to refine processes and reduce an error rate. For an IVR (the computer voice which walks you through options when you call in), S.S. or a process like it is indispensable. But for agent-customer interaction, eh, not so much.

The reason for this is that most agent-customer interactions are not situations that lend themselves well to measuring, except in the most basic sense. For example, a good call center has programmed their IVR to handle all the routine, everyday tasks that make up the bulk of their incoming calls. This makes good, logical sense - you don’t have to pay people to take the calls that the computer can handle, and the computer can handle many, many more calls at one time than your poor agents can. Plus, it will never piss off the customer with speech impediments, poor word choice, chewing gum on the call, burping, forgetting to put mute on, badmouthing the customer when the agent thinks mute IS on, and so on. The problems it does have are repeated problems that can be ironed out by applying S.S. (or again, other) methods.

In contrast, your agents will do all of those no-nos and then some, and the customer is pretty much guaranteed to be a non-standard problem. The standard problems that slip through the IVR (in a well-run center) are insignificant when compared to the weird ones that the IVR can’t handle and forwards on to an agent. (Side note: ideally, you’d apply S.S. to your output, whatever it is, and reduce the number of weird problems that can arise – this is known as Root Cause Analysis – but even my rosy world ain’t that rose-colored.)

Because you can’t really quantify the myriad ways for an agent to screw up, nor the innumerable causes for the call in the first place, your best bet lies in applying S.S. principles internally, to standardize your practices and figure out the best way to handle the weirdness coming down the line. This usually boils down to training methods, QA and feedback controls, operational decisions such as whether it’s worth keeping agents on during slow times, and so on. All of that lends itself well to statistical analysis, if you impose stringent enough controls. The interactions between agent and customer do not, except in the most basic sense (such as how long a call lasts, which is known as Average Handle Time, and which is the STUPIDEST method I know to rate an agent’s performance unless their only job is to do things your IVR should be capable of doing).

Unfortunately, the agent-customer interaction is the most visible, and thus gets most of the attention. It’s important, sure, but it will get better if you get better in other areas, and – this is the important bit – it typically does not affect your bottom line nearly as much as the vastly more numerous interactions your automated (and thus quantifiable) systems do.

Simply put, if S.S. were used where it would do the most good, I’d have no objections to it. Unfortunately, what I see is everyone trying to make the most visible improvement, rather than the most effective. And the typical S.S. consultant either doesn’t know what they’re doing re: the call center industry, or is deliberately trying to fatten his/her own bottom line.

True, but neither can they be considered as equivalent targets of Six Sigma opportunity. Shades of grey, my friend. Be like water :slight_smile:

The more standardized and repetitive the “white collar” group is, the more potential analysis benefit can be realized. In my experience, classic bureaucratic office environments (e.g. Dept of Motor Vehicles, Insurance Claims Processing, etc) are ripe fruits for Six Sigma picking, and surprisingly huge efficiency gains can be realized. SS tools are great at exposing wasteful processes.

Non-routine and highly variable environments, alternatively, should view Six Sigma with a healthy dose of skepticism and even fear.

Just out of curiosity, how would you apply Six Sigma to a university? I used to be the editor at a university’s student newspaper, and one of the managers of our satellite campuses was trying to push Six Sigma to improve the university’s finances and student retention. Would it make any sense for that on the classroom and faculty level? I can see where it might (theoretically) help with a university’s bureaucracy, which rivals the worst DMVs. But it’s hard to quantify the “output” of professors, and I’m sure if you tried they’d resent the hell out of it.

The company I worked for attempted to implement such a plan. Unfortunately, upper management was a bunch of good old boys who did not have any clue about the core technology upon which the company was based.

We built magnetic recording heads for hard disk drives. The company was started by an electronics genius who wound wires around ferrite cores resulting in magnetic recording heads. I believe we manufactured part of the data recording device on one of the first Mars (Moon?) landers.

Our core expertise was exploiting young girls in third world nations who would hand wind magnetic recording heads. After they went blind or got carpal tunnel syndrome, they would go off and get married and the next 17 year old would come and take their place.

When labor got too expensive, we’d pick up operations and move to the next emerging cheap labor supplier nation.

Times and technology changed. Magnetic recording became more of an integrated circuit manufacturing type of technology. One of the biggest problems was that traditional test metrics no longer correlated well with product quality. If you can’t predict the quality of your product and process, six sigma quality control will do little to enhance the quality of your product.

Management brought in these experts, who forced everyone to sit through 6 weeks of courses. Assemblers without a high school education were sat next to PhDs in Electrical Engineering and lectured on statistics. The results were predictable and ineffective.

After one week, I managed to convince my boss not to force me to go. Actually, I just kept putting off going to the next round of classes. My job was to explore new test metrics for magnetic recording, which was actually pretty relevant to total quality control. Personally, I believe that the information included in the classes could have been communicated in a one day seminar.

After we paid the consultants millions of dollars, our company went out of business in a couple more years. I think I could have predicted it.

Under the right circumstances with proper management, six sigma methodologies can work quite well. I suspect that Toyota and Intel have both successfully implemented such programs.

Wow. We almost certainly know each other. I worked there from '90-'98. I totally agree with your assessment too.

Everybody in Santa Barbara is separated by no more than two degrees. It’s a pretty incestuous town, frighteningly so.

Yours is the perfect example of why people are leery of SS, and why it so often lands on organizations with a damp clunk. It should be implemented seriously only in those areas where appropriate, but not pushed blindly at all departments. Unfortunately, it usually gets pitched to the upper management as a blanket panacea. Management usually doesn’t receive enough training or guidance to make the informed decisions about how to apply the tools.

End result is your poor lab professor is going to be asked to “develop and present a quarterly vision for buzzword buzzword buzzword” and justifiably feel that his/her important time is being wasted by all of the SS hoops to jump through.

I’m responsible for a chemical manufacturing process that’s so noisy that I’m all happy when I do a regression and get an r² > 10%. So, yeah, it’s a bit painful doing Six Sigma on it. On the other hand, I’m pretty much assured of perpetual employment, 'cause there’ll always be more variation to stifle.
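To give a feel for what that kind of noise looks like, here’s a toy sketch (made-up numbers, not my actual process data) of fitting a line to a weak signal buried in noise and computing the R² by hand:

```python
# Toy illustration of a "noisy process" regression: the true slope is
# real but tiny relative to the noise, so R^2 comes out very low.
import random

random.seed(42)
n = 200
x = [random.uniform(0, 100) for _ in range(n)]
# weak true signal (slope 0.05) swamped by noise (std dev 10)
y = [0.05 * xi + random.gauss(0, 10) for xi in x]

# ordinary least-squares slope and intercept
mx = sum(x) / n
my = sum(y) / n
sxx = sum((xi - mx) ** 2 for xi in x)
sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
slope = sxy / sxx
intercept = my - slope * mx

# R^2 = 1 - SS_residual / SS_total
ss_tot = sum((yi - my) ** 2 for yi in y)
ss_res = sum((yi - (slope * xi + intercept)) ** 2 for xi, yi in zip(x, y))
r2 = 1 - ss_res / ss_tot
print(f"R^2 = {r2:.3f}")  # low single digits, percentage-wise
```

The fit is “real” in the sense that the slope estimate points the right way, but you’d be explaining only a sliver of the variation - which is exactly the situation that makes textbook Six Sigma painful.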

If I had to do it, I wouldn’t start with the most difficult problem to be tackled (i.e., the one requiring you to quantify the “output” - if any - of professors). And I’m always leery of people who want to use Six Sigma to improve finances; there’re other tools for that.

Student retention is probably one that can be tackled, although it’d require a fair bit of work. First up, let’s assume that you’ve already clearly defined what you’d be measuring for student retention, and have agreed on what the goal is (not always a safe assumption).

I’d start by “data mining” – finding all the data you have on student retention, and sorting it every which way you can. Find out if certain majors have better retention, if retention decreases based on which year the student is in college, or by age, gender, etc. How variable is the distribution of student retention measures? Was it more variable some years than others? Basically, you’d look to come up with some testable hypotheses without putting too much effort into measuring things, yet.
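The “sort it every which way” step is mechanically simple. Here’s a minimal sketch - the records and field names are invented for illustration, not real student data:

```python
# Group retention rate by an arbitrary field of each (made-up) record.
from collections import defaultdict

records = [
    {"major": "History", "year": 1, "retained": True},
    {"major": "History", "year": 1, "retained": False},
    {"major": "Chemistry", "year": 1, "retained": True},
    {"major": "Chemistry", "year": 2, "retained": True},
    {"major": "History", "year": 2, "retained": False},
    {"major": "Chemistry", "year": 1, "retained": True},
]

def retention_by(key):
    """Retention rate grouped by any field name in the records."""
    kept = defaultdict(int)
    total = defaultdict(int)
    for r in records:
        total[r[key]] += 1
        kept[r[key]] += r["retained"]  # True counts as 1
    return {k: kept[k] / total[k] for k in total}

print(retention_by("major"))  # does one major retain better?
print(retention_by("year"))   # does retention drop in a particular year?
```

Each slice that shows a big difference between groups is a candidate hypothesis for the next step.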

Next, I’d start testing those hypotheses. In my job, that usually involves designed experiments (DOE). For student retention, it’d probably involve a bit more focused data analysis – student questionnaires, and such. It might be difficult to get reasons why students dropped out; maybe you’d need to have students answer why they didn’t drop out when they register each semester. It’s a bit difficult on these types of problems, figuring out how to collect and record the data – e.g., if someone tells you they dropped out because they’re “too poor”, does that count as a “need more student financial aid” or “tuition costs too high”? For a first-pass at improving student retention, I would expect that this step would basically be some type of Pareto analysis.
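A first-pass Pareto analysis is just tallying causes and finding the few that cover most of the cases. A hedged sketch, with invented dropout reasons and counts:

```python
# Pareto analysis: rank (made-up) dropout reasons and keep the
# "vital few" that account for ~80% of the total.
from collections import Counter

reasons = (["tuition cost"] * 40 + ["academic difficulty"] * 25 +
           ["transferred"] * 15 + ["family"] * 10 +
           ["campus stink"] * 6 + ["other"] * 4)

counts = Counter(reasons).most_common()  # sorted by frequency
total = len(reasons)

cumulative = 0
vital_few = []
for reason, n in counts:
    cumulative += n
    vital_few.append(reason)
    if cumulative / total >= 0.8:  # classic 80% cutoff
        break

print(vital_few)  # the handful of causes worth attacking first
```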

Once you’ve got your hypothesis testing done, you’ll hopefully have some idea of what the big causes of variation are. Maybe most students didn’t re-enroll because the campus is next-door to a cabbage rendering plant, which stinks. So you’d need to fix the campus-stink problem to improve your student retention. And you’d need to make sure that the stink doesn’t come back, once you’ve fixed it.

From the Wikipedia article:

The entire point to the exercise is to actually MEASURE what you’re doing.

I know, that doesn’t sound like a big deal. It’s common sense, you think. But in fact, most businesses really have no fricking clue what processes they have that are or aren’t inefficient, or how inefficient they are, or where their problems really are.

Disclaimer: I’m an ISO 9001 auditor, part of the time anyway, and so I see a LOT of businesses and how they try to fix problems. The great, great majority of businesses out there, at least that I visit, aren’t measuring jack squat. Last week I had this conversation with the owner of a business:

ME: I see that quite a lot of your projects go past the due date.
HIM: Yes, that’s a problem.
ME: What percentage of projects are past the due date?
HIM: I’m not sure.
ME: You don’t know? Wow. Well, let’s find out.
(Scramble for numbers)
HIM: I think it’s about eighty percent.
ME: Okay. Why?
HIM: Why what?
ME: Well, why are so many of them late?
HIM: Huh?
ME: You know, what’s the reason they’re late?
HIM: Well, this and that, you know. I’m not sure exactly which ones happen the most.
ME: So who sets these due dates you never meet?
HIM: The project managers.
ME: Based on what?
HIM: Hrr?
ME: You know, where do they come up with the due dates?
HIM: Well, they guess when they might get it done.

Honestly, I see this every week. You’d all be utterly shocked at how many businesses really, truly don’t know the most basic things about their own business because they don’t measure them. And many times they aren’t as honest as that guy; they THINK they know what’s going on, and when they finally start measuring it, they’re flabbergasted to discover they don’t.
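For the curious: the measurement that owner never did is about five lines of code. The project records and dates here are invented for illustration:

```python
# Given hypothetical project records, compute the late rate and the
# average slip - the two numbers the owner couldn't answer.
from datetime import date

projects = [
    {"due": date(2024, 3, 1), "done": date(2024, 3, 15)},
    {"due": date(2024, 4, 1), "done": date(2024, 3, 28)},
    {"due": date(2024, 5, 1), "done": date(2024, 6, 10)},
    {"due": date(2024, 6, 1), "done": date(2024, 6, 2)},
    {"due": date(2024, 7, 1), "done": date(2024, 8, 1)},
]

late = [p for p in projects if p["done"] > p["due"]]
late_rate = len(late) / len(projects)
avg_slip = sum((p["done"] - p["due"]).days for p in late) / len(late)

print(f"{late_rate:.0%} late, averaging {avg_slip:.0f} days over")
```

Getting from “I think it’s about eighty percent” to a number like that is the whole game; the Pareto of *why* they’re late is the obvious next step.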

Where are the defects happening? Why?

Those questions ain’t as easy to answer as you might think.

I’d agree that a lot about the Six Sigma methodology in particular is silly. The whole “black belt” thing is stupid. But the central idea - actually measuring what’s going on - is very smart, and is practised by far too FEW businesses.

What confused me is the apparent underlying assumption that the SS guy can answer those questions, while the people who actually DO those jobs every day, can’t. It never made sense to me that someone trained in essentially entry level statistics is needed to tell someone trained in making widgets how to…make…widgets. I suppose now I understand that the more typical approach is to simply train the widget maker in entry level statistics, which makes sense.

It can be notoriously hard to figure out defect causes in a factory, and sometimes the widget makers themselves are the worst at it, because they are biased. We studied one example in class where a Six Sigma black belt was called in to figure out why the defect rate in a certain plant was so high. The plant operators themselves swore up and down that there was nothing they could do, because they had personally run statistical samples on every machine, had a system in place to randomly test parts for tolerances, and had verified that every machine was in spec, etc.

In the end, it turned out to be some second-order interaction between the various machines that was causing the problem, and the black belt figured it out. Each individual machine was within tolerances, but there was something about the way they interacted that was causing problems. The black belt re-organized the assembly line, changed the process slightly, and the defect rate plummeted.
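Here’s a toy model of that kind of interaction effect - not the actual case study, and the factors and probabilities are invented - showing why checking each machine on its own can miss the cause entirely:

```python
# Simulated two-factor interaction: defects spike only when factor A
# AND factor B combine, so single-machine checks all come back clean.
import random

def defect_rate(a_hot, b_alt, trials=10_000):
    """Simulated defect rate under two process settings (toy numbers)."""
    rng = random.Random(1)  # fixed seed so the sketch is reproducible
    base = 0.01                                    # baseline defect probability
    p = base + (0.08 if a_hot and b_alt else 0.0)  # interaction term only
    return sum(rng.random() < p for _ in range(trials)) / trials

print(defect_rate(False, False))  # ~0.01
print(defect_rate(True, False))   # ~0.01 - machine A alone looks fine
print(defect_rate(False, True))   # ~0.01 - machine B alone looks fine
print(defect_rate(True, True))    # ~0.09 - the combination is the problem
```

A designed experiment that varies both factors together finds this in one pass; testing each machine separately never will.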

Six Sigma is really just another way to give you a formal process to run through that makes you exercise your brain. It puts rigor and numbers to analysis that otherwise might be done with intuition or ad-hoc analysis that leaves gaps in understanding. It’s like a checklist for pilots - it’s just formalizing the steps you should be taking anyway, but putting it on a checklist means it’s less likely that you’re going to miss a step or make an invalid assumption about what needs doing.

I don’t understand why you think those necessarily have to be mutually exclusive groups.

But even if you do bring in a consultant, so what? People specialize; that’s how human beings organize themselves to create wealth. You don’t hear people say “gosh, why bother hiring accountants?”

You don’t necessarily have to buy into Six Sigma as a specific SPC program, but trust me, statistical analysis works. The people making the widgets DON’T necessarily just know what’s wrong and what’s right at the widget factory until they systematically measure it. Humans can’t just sit in the middle of an operation like that, see a millions things going on, and come away with a clear, unbiased, numerically accurate picture of the truth.
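One of the most basic SPC tools is an individuals control chart, which turns “the numbers feel off” into a computed limit. A minimal sketch, on made-up widget measurements (the I-MR moving-range method with the standard d₂ = 1.128 constant):

```python
# Individuals (I-MR) control chart on toy data: estimate short-term
# variation from moving ranges, set 3-sigma limits, flag outliers.
widths = [10.02, 9.98, 10.01, 9.97, 10.03, 10.00, 9.99, 10.04,
          10.01, 9.96, 10.02, 10.21]  # last point: something changed

baseline = widths[:-1]                 # stable history
mean = sum(baseline) / len(baseline)

# moving ranges between consecutive baseline points
mrs = [abs(b - a) for a, b in zip(widths[:-2], widths[1:-1])]
sigma_est = (sum(mrs) / len(mrs)) / 1.128  # d2 constant for subgroups of 2

ucl = mean + 3 * sigma_est
lcl = mean - 3 * sigma_est

out_of_control = [w for w in widths if not (lcl <= w <= ucl)]
print(out_of_control)  # the drifted point gets flagged
```

The point isn’t the arithmetic - it’s that without computing limits from the data, a human watching the line has no principled way to tell signal from noise.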

I’m a certified Black Belt - though I haven’t done that type of work in a few years.

It’s often misapplied. It works great in high-quantity manufacturing operations, and it’s less applicable to business operations - though I’ve used it myself for call center analysis.

It’s really just a repackaging of a lot of existing tools into a methodology. A little Juran, a little Deming - shaken, not stirred - and toss in things like FMEA (Failure Mode and Effects Analysis).

There are a few big issues with Six Sigma - people try to solve problems too big for the methodology, people try to solve problems too small for it (Nike projects - “Just Do It”), people don’t know the stats and misapply it, people don’t get what it’s used for and misapply it. I’ve seen all of the above. Companies often start a Six Sigma program and look for the people sitting around doing nothing to turn into Black Belts - which is completely wrong - you need people who “get” analysis, who like nothing better than to prove something with data, and who aren’t closed-minded when they prove something they didn’t set out to prove.

But properly applied to the right problem - and often combined nowadays with Lean Manufacturing - it’s been an incredible boon to manufacturing. And I think it can be applied successfully outside manufacturing when you understand what you are doing: the process analysis techniques, the FMEA-type stuff, keeping a process in control.

But it’s a toolbox. You pick and choose your tools.

And that terminology really pisses off those of us who are genuine Black Belts.

The two guys who founded it - one was a Judo Black Belt - the other guy was really into Star Wars. The guy who was into Judo won the naming - otherwise I guess I’d be a Jedi Master - which is sort of geeky lame.

Because there is no widget maker per se in an actual manufacturing plant.

You’ve got grunts on the floor who watch pallet lines going around in circles adding the appropriate pieces, and they get to hear management’s newest and greatest thing since sliced cheese.

Then you have your lead hands, who are the point guys doing changeovers and the first responders when one of the lines goes down, or who have to poach people from one line to cover another when someone in the office starts crying that the sky is falling and we need this part now to make shipment.

Next up are supervisors, who basically assign personnel to various lines and juggle scheduling and various other duties.

Then you have QA reps, third-party certified auditors, contractors working on various lines, and on and on.

Even with a management policy of transparency in communicating the big picture to us grunts, not everyone is aware at any given time what the big picture is.

A normal factory is a team effort, with different departments clashing, colliding, and co-existing all at the same time.

The right consultant doing some simple observation can usually spot something that either does not look right, or is an ergonomic nightmare that’s causing problems because it’s always been done that way.

Declan

Well, it bugged me a little at first, but I got used to it. But it is a little strange to hear engineers steal a title like that from another discipline, considering how militant engineers are about keeping their own titles pure.

We’ve also used it to “prove” the guys on the floor were right. Sometimes the Black Belt gets in and starts talking to people, and a whole bunch of guys say “it’s this part right here, we’ve been telling the production manager, but he thinks this supplier is really good and it must be something we are doing.” So you stick that into the set of things you are looking at, as well as “is it something that we are doing” - you do your testing - and the guys on the floor are right. The production manager, now confronted with hard numbers (and visibility - Six Sigma is really good for adding visibility) comes around, and you get a new supplier.

C’mon, Sam, every single engineer in this thread (and I will add my voice to the chorus) thinks that the title of Black Belt for a stats geek is lame. Don’t blame engineers for something that a couple of guys somewhere made up.

Did you know that they now have a “Super Black Belt” title?