Do You Believe In Consumer Reports Product Ratings?

I do trust CR; they seem to do pretty good testing of consumer products. But I am continually amazed by the huge disparity in ratings for some products. Take washing machines: we were looking to buy a new one, so I went to CR for info. I noticed that some very expensive brands did not rate as highly as some less expensive ones. For example, one Bosch machine priced at over $1000 was not rated as highly as a machine costing less than $500.
Do you believe this? How could there be such a disparity in the ratings?
I would think that a machine costing so much more would have better-quality components and be sturdier. Anyway, are a lot of high-priced models simply the same machines as the lower-priced models, with some added features?

Believe in 'em? Hell, I’ve seen 'em!

The price may have to do with longevity, or with the ability of an appliance to be repaired, even at excessive cost. Something like: yes, you can spend twice as much and it may last longer, but the repair bill on the Maytag may well more than make up for it.

CR has been testing stuff since the 1950s. I'm impressed that they don't let their ratings get used in advertising ("Buy Maytag, rated #1 by CR!"). They've gone to court to stop that nonsense.

Nothing's perfect. There are times when a manufacturer's model X will get a high rating but model Y won't be as good, even though it's the same manufacturer and the machines look identical. CR can only say that the model they tested did well.

I believe in their integrity. I do not always agree with their testing regimen, their rating criteria, or their results. It all depends: I'm a lot more leery of their electronics reviews, for example, because I'm not always sure they're concerned with the same things I am, or testing them the way I would.

But in general I think they are a solid enough starting reference for purchasing research.

I think in general they mean well, but I think they’re biased in that they see their goal as helping people see through the hype and advertising etc. So they’re biased against heavily hyped/advertised products.

I also think it’s weird that they sometimes rate things that anyone could easily test for themselves, e.g. the taste of various types of OJ and the like. In such cases, I am even more suspicious that their goal is to take down the most advertised brand and highlight a cheaper one.

That, and Tamerlane’s post.

I believe in their integrity as well. Often the reviews will contain fairly detailed info on their tests. Usually testing is done blind; they go so far as to cover the logos on cars for the drivers testing them (although I'm sure most professional drivers don't need logos to tell which car it is).

The problem I have is that they often don't take into account the same things I do. Styling is almost never used in their ratings, while to me it makes a difference. Of course, that is more subjective. I'm trying to think of some examples, but sometimes I am not sure their testing actually replicates real-world use of the products.

But whatever their problems, I don't think they stem from inherent bias or corruption. Usually they have details on their testing, and subratings for different aspects of individual products. Many manufacturers get by on advertising, which is why I can easily see a washing machine costing half as much outperforming the other one.

Also, there is a difference between well made and reliable. For example, German cars, last time I checked, had a fairly low reliability rating from CU, but I think many would agree that their fit and finish, among other factors, is superior to that of Japanese cars.

I think if you take them with a grain of salt, you can find value there. I've been a subscriber for some time. I often use their reviews when buying something, but sometimes substitute my own judgment when I feel a review hasn't captured the essence of what I am looking for. But I have no doubt that if they say detergent X is better than detergent Y at getting out red wine, they have run that test and found it to be true.

I agree with Tamerlane.

Here's a story of my own. Years ago I went shopping for a dishwasher. Universal Waste King was rated highest at the time. One of the reasons was that a child could stand and bounce on the door and it would not break. Not my idea of a rating feature. No kid of mine would even consider standing on the dishwasher door. Why would I consider it a feature?

This is my opinion also. They may rate one camera higher than another for reasons that don't matter to me. Same with inkjet printers: they might complain about ink costs, but I'm willing to pay the price for better printer drivers with more controls.

Count one more vote for trusting their integrity, but not necessarily caring about their criteria, and thinking that sometimes their rating formulas aren't as useful as they could be.

For example, if you look up dishwashers, they rate things like water and power usage very highly, equally with "does it clean dishes well," which seems really bizarre to me considering that the damn thing is called a DISHWASHER, and the total water and power usage probably costs a fairly small amount of money.
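CR doesn't publish its exact weighting formula, so every number below is made up, but here's a toy sketch of the complaint: with equal weights, a machine that's merely efficient can outrank one that actually cleans better, and weighting cleaning more heavily flips the result.

```python
# Toy illustration of how criterion weights can flip a ranking.
# CR does not publish its formula; all weights and scores here are
# invented purely to show the effect.

def overall(scores, weights):
    """Weighted average of per-criterion scores (0-100 scale)."""
    return sum(scores[c] * weights[c] for c in weights) / sum(weights.values())

# Hypothetical machines: A cleans better, B is more water/power efficient.
machine_a = {"cleaning": 95, "water_use": 60, "power_use": 60}
machine_b = {"cleaning": 70, "water_use": 90, "power_use": 90}

equal_weights  = {"cleaning": 1, "water_use": 1, "power_use": 1}
cleaning_first = {"cleaning": 3, "water_use": 1, "power_use": 1}

print(overall(machine_a, equal_weights))   # 71.7 -- B "wins" overall
print(overall(machine_b, equal_weights))   # 83.3
print(overall(machine_a, cleaning_first))  # 81.0 -- now A wins
print(overall(machine_b, cleaning_first))  # 78.0
```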

I've had much better luck on appliances going to Home Depot or Lowe's and finding models with large numbers of positive customer reviews.

I trust them; I got my car because it was rated a best buy.

They are looking for the total package – how well it works and total cost of ownership. And they are very clear about what criteria they use (for instance, they stress road noise far too much when rating cars).

Do you have any evidence at all for this? You are accusing them of fraud, given that they say they do their tests double-blind where possible. I find it much easier to believe that some brands hype their mediocre products beyond their worth than that a neutral lab deliberately falsifies its test results in order to fulfill some pointless, incomprehensible policy goal.

I'll tell you why. Our last old-faithful, highly effective dishwasher had its door bent down when my older son accidentally stepped on it. It couldn't be fixed because the parts are no longer obtainable. We bought a new one. Our sons were roughhousing in the kitchen when the older one tripped backward and fell on the door of the new dishwasher. It seems to be OK.

You may have better-behaved children than I do, but I don't think CR was testing a useless feature.

I imagine that “where possible” will be very rare, for experienced product testers.

Me too.

I trust them for stuff I have no clue how to rate, or no significant personal market feedback on: washers, dryers, car tires, laundry detergent, power washers, battery chargers, etc. Their recommendations might not be perfect, but they have more info than I do.

Regarding stuff I do know a lot about, their ratings sometimes seem somewhat crude and oddly arbitrary, but they rarely recommend *bad* stuff; they just don't seem to know why the really good stuff is really good. And admittedly, personal biases regarding aesthetics, ergonomic preferences, etc. play a role here.

Re PCs and other complex electronics, one big problem is that these items have a very fast life cycle, and by the time the review is published, the rated device is often close to being replaced or discontinued.

How so? Double-blind testing is child's play with orange juice. It's child's play with ranking swatches of washed cloth or dishes.

I agree regarding OJ, although again, I don't see the point of testing OJ to begin with. But with dishwashers and other appliances, much of what they're testing is not the outcome but the features and so on. You can't do that without recognizing the machine.

So you think they falsify their list of features? Well that should be easily researchable. Do you have a list of examples of them inaccurately listing features (outside of the odd mistake)?

I don’t think they falsify anything, but how they rank the importance of one feature over another is highly subjective, as others have noted.

So you think they decide how they will rank features after they have checked what features the hyped articles have, and rank them down? Any evidence?

Frankly, I think you are just blowing smoke, all the more since you started out saying:

Emphasis added. Now you admit that OJ testing could be done double blind, but earlier, you were for some unspecified reason especially suspicious of OJ testing. And now you are talking about features being an area of bias, even though OJ doesn’t have features.

Further, even if you are correct, this wouldn’t overly worry me since they always list the features and I can make my own judgment in that respect. It’s the testing information that is valuable.

Not really. It's the impression I get from reading a lot of CR articles. But there's a smidgen of evidence in their ratings of things that really shouldn't be rated at all, as above.