Does Consumer Reports use flawed methodology?

AFAIK, Consumer Reports compiles its consumer satisfaction and reliability ratings by way of an annual survey mass mailed to CR magazine subscribers nationwide.

My understanding of research design is that mass-mailed, self-report surveys are inherently flawed. No doubt CR is limited by budget, but relying on self-report doesn’t seem like the best science and may unintentionally misrepresent the actual experience of consumers. One obvious problem is that response rates for mailed surveys are typically too low to yield reliable and valid data. Other issues swirl around the lack of respondent validation and the lack of follow-up. In fact, I’m not sure how representative their samples are, given the comparatively tiny circulation of CR magazine. Remember: only a fraction of CR readers actually subscribe to it.

What am I missing here?

(Note: CR’s testing methodology is an entirely different issue.)

Well, you’re missing IMHO. :wink:
But how do you know these types of surveys, as used by CU, are flawed? Their readers don’t reflect the population in general, but they are people who have a stake in the results.
Peace,
mangeorge

You’re not really missing anything, and neither is Consumer Reports. They don’t claim their satisfaction lists are scientifically accurate, statistically valid, or anything other than the tallied responses of readers who filled out and returned the surveys. If you look at the annual auto report, you’ll see lots of models where the entire column is stamped “Insufficient Data.”

This may lead to a certain amount of subjectivity in their product reviews. Perhaps they downgrade a model because readers have been dissatisfied with it, when a true random sample of all owners would cause it to be rated higher. But that’s what J.D. Power is for. I don’t see Car and Driver or Road and Track as being better indicators than CR.

From my reading, Consumer Reports carefully separates the results of its internal testing from the subjective evaluations of its reader surveys. The latter are used almost exclusively for customer satisfaction responses, as when they rate service at hotels or restaurants, or the reliability of cars and other durables.

Consumer Reports’ circulation is 4,000,000, which gives them a very large base, and their readership is almost fanatically loyal, so response rates are extremely high. They get hundreds of thousands of responses on any subject, a sample that would be prohibitively expensive to gather through any kind of random survey. They also have minimum response thresholds for each study, so they don’t publish results if not enough people respond on any one piece of it.

So in one sense the OP is quite right to say that these are self-selected respondents and may not be representative of all consumer opinion. On the other hand, CR doesn’t ever claim they are. They are upfront that these are the collective opinions and experiences of people much like the readers of the results. The magazine is frankly oriented to middle-class families and values. If you are not part of this group, you have to understand your needs may not be well represented. But then you’re probably not reading CR in the first place.

I remember a long time ago (~15 years), CR rated mountain bikes. Being a cycling enthusiast, I eagerly dug into the article… I noticed a number of high-end bikes were deemed unacceptable… because CR rejected any bike whose front brakes were powerful enough to flip the rider! Any mountain bike that can’t lock the front wheel is totally unacceptable for any semi-experienced rider. Also, braking power often depends on the mechanic who built the bike and tuned the brakes.
I haven’t read CR since and I haven’t looked back once.

In some respects, having self-selecting respondents isn’t really a problem. Who’s most likely to respond to a survey? As a general rule, it’s people who are unhappy with a product. So, the CR satisfaction rates might be understated. However, is it more likely that people who’ve had trouble with a GE washer will respond to the survey than people who’ve had trouble with a Whirlpool washer? Probably not. So, if the results indicate that GE washers are less reliable than Whirlpool washers, that conclusion is probably still valid, whether the survey respondents are self-selected or not.
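If you want to convince yourself of that, here’s a quick simulation sketch. All the numbers in it (failure rates, response rates) are made up purely for illustration, not anything CR actually publishes:

```python
# Toy simulation: unhappy owners respond more often than happy ones,
# but the response bias is the same for both brands, so the ranking
# survives even though both observed problem rates are inflated.
import random

random.seed(42)

TRUE_FAILURE = {"GE": 0.20, "Whirlpool": 0.10}  # hypothetical true rates
P_RESPOND_UNHAPPY = 0.40  # unhappy owners mail the survey back more often...
P_RESPOND_HAPPY = 0.10    # ...than happy ones, identically for both brands

for brand, fail_rate in TRUE_FAILURE.items():
    problems = responses = 0
    for _ in range(100_000):  # simulated owners of this brand
        had_problem = random.random() < fail_rate
        p_respond = P_RESPOND_UNHAPPY if had_problem else P_RESPOND_HAPPY
        if random.random() < p_respond:  # owner returns the survey
            responses += 1
            problems += had_problem
    print(f"{brand}: true rate {fail_rate:.0%}, "
          f"observed rate {problems / responses:.0%}")
```

With these made-up inputs, the observed rates come out around 50% for GE and 31% for Whirlpool – both wildly inflated, but GE still correctly shows up as the less reliable brand.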

The problem with CR is that they’re good at the things they focus on, like cars and home appliances, but when they start wandering farther afield and judging stuff like higher-end digital cameras, notebook PCs, fishing reels, and a variety of other specialized gear, some of their evaluations are (IMO) way off the mark.

They’re starting to rate prescription drugs and medical treatments now btw :eek:

Actually I don’t have a problem with that - it’s better than the advice of the pharmaceutical companies and their advertisements.

The flaw in CR’s product surveys hits all products equally. It may be true that consumers who had trouble with their cars and washing machines are more likely to respond, but you still get a pretty good idea of which brands are more reliable.

Just like I said, in other words.

The toaster testers are no good at higher-end cars either. As mentioned about the bikes above, the criteria are not always structured for enthusiasts.

I would argue that the criteria are almost never structured for enthusiasts; CR tests/reports are for the casual consumer who is somewhat ignorant about the item in question and needs advice on a baseline buy. If you are an enthusiast, you probably already have some ideas about what you are looking for, and you read the reviews in the enthusiast magazines. If you are an enthusiast, your needs are probably different from Joe Middle Class’s as well. Take cameras, for instance: Joe M.C. wants a camera that will automatically take clear pictures of Joe Jr. without getting a lens cap or a thumb in the picture, while Mr. Pro Foto worries about shutter speeds and manual settings that, in the hands of Joe M.C., would result in pretty blobs of color on film.

But is the owner of a Bosch washing machine as likely to respond to CR’s survey as someone who owns a GE model?

Ultimately, I now understand that CR doesn’t claim scientifically valid survey results re: consumer satisfaction. That said, I don’t see this clearly explained in each and every consumer report they issue. Sure, it might be buried somewhere in the fine print or implied, but many consumers treat CR as gospel. I’ll also note that some companies with favorable ratings cite their products’ high ratings “in a leading consumer magazine.”

Thanks for the responses. Not sure if they are representative of the SDMB, at large. :wink:

I can’t imagine why he wouldn’t be, unless one can theorize that Bosch owners are temperamentally unlikely to respond to surveys.

Just guessing, of course, but I’d say that the owner of the Bosch machine who has had a problem with that machine is just as likely to respond as the owner of the GE machine who has had a problem with that machine. Similarly, the owner of the Bosch machine who has not had a problem is just as likely to respond as the owner of the GE machine who has not had a problem.

So, when CR reports that 10% of respondents had a problem with a Bosch machine, but 20% of respondents had a problem with a GE machine, I’m reasonably confident that the GE machine is less reliable. What you can’t conclude from those numbers is that, if you buy a machine from GE, you’ve got a 20% chance of having a problem with it. Because of the self-selecting nature of the survey, the real figure is probably much lower (a WAG, I grant you!).
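To put some made-up numbers behind that WAG: suppose unhappy owners are four times as likely to return the survey as happy ones (a pure assumption on my part – CR publishes nothing of the sort). Then the observed rate is k·p / (k·p + 1 − p) for true failure rate p and response multiplier k, and you can invert it:

```python
# Back out the true failure rate p from an observed survey rate,
# assuming unhappy owners are k times likelier to respond (k is a guess).
def true_rate(observed: float, k: float = 4.0) -> float:
    """Invert observed = k*p / (k*p + (1 - p)) for the true rate p."""
    return observed / (k - (k - 1) * observed)

for brand, observed in [("Bosch", 0.10), ("GE", 0.20)]:
    print(f"{brand}: observed {observed:.0%} -> true rate ~{true_rate(observed):.1%}")
```

That prints roughly 2.7% for Bosch and 5.9% for GE – the ranking (and even the rough 2:1 ratio) survives, but the absolute figures were inflated by the self-selection.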

Though I think CR is a fine reference, as far as it goes, and I particularly appreciate that they explain their methodology, there is a small fly in the ointment of the survey. While it may be true that unhappy buyers of a GE washer are roughly as likely to respond as unhappy buyers of a Bosch washer – and I’ll probably have thought of a counterexample by the time I finish the next paragraph or two – there have been demonstrably successful campaigns to drum up customer enthusiasm over and above actual satisfaction with the product.

Saturn, for example, hit a bump in customer satisfaction in its early years, after an initially promising start. The major part of their immediate campaign was to engage the enthusiasm of Saturn owners. While they’d always sought to encourage word of mouth, they now added a new project budget to redouble their efforts to create a sense of belonging to “the Saturn family”. In some areas, owners recruited new buyers with an almost cult-like zeal which, when you questioned them on details, seemed somewhat unrealistic. (Saturn did fix its issues and growing pains, but it took a year or two for those structural changes to be fully implemented. I’m talking about a resurgence in zeal at a time when objective measures were actually down.)

Similarly, unhappy buyers (I knew I’d think of a counterexample for this) of expensive products with a certain cachet may be reluctant to complain. They have a vested interest in keeping the cachet of their expensive purchase high, and on a deeper level, there can be significant “cognitive dissonance”. If you stretch to spend $100K on a car because of an ad campaign that targets the “can barely afford it” market, you may feel obligated to justify the purchase to yourself, even if there are problems.

That said, I’ve always considered surveys to be among the less reliable “objective” data in general use, so what can you reasonably expect? Taken for what it is, the survey probably conveys what data it can, with reasonable accuracy, within the categories (e.g., the 1-5 scale) that CR often uses. I trust CR to try to eliminate or report systematic biases as they become aware of them. YMMV.

Good counter-example. I’ve run into that phenomenon on occasion (“Yeah, my X is a great car - my mechanic says it’s really well-built.” Response: “If it’s such a great car, how come you’ve become chummy with a mechanic?”)

That sort of is the bottom line, isn’t it? CR’s methodology is subject to some inaccuracy, a kind of roughness around the edges, but it’s tough to propose a good alternative.