I gave a five-star review to a place that has Cafetería right in the name. There was a 1-star review which said “this place was more like a cafeteria than a restaurant”… hmmmm-k then?
Which reviews I look at depends on how many there are; I don’t give them more or less credit based on the number of stars, but based on whether the writer can string a sentence together and seems to know what they are talking about. Downgrading a restaurant because they didn’t have allergen information available - thank you, with my allergies I’d rather avoid it. Downgrading a Japanese restaurant because a lot of dishes had soy - if you wanted noodles and no soy, try an Italian next time.
[QUOTE=Staggerlee]
And my favourite responses to 1 star reviews on there are from the Balmer Lawn Hotel in Brockenhurst, which rather than apologising, accuse the reviewers of dirty tricks and lying. Marvellous.
[/QUOTE]
Some of that manager’s replies are hilarious. I especially enjoyed the attack on a reviewer who found his room substandard (cracks in the wall, nonfunctioning toilet, etc.). Why was the guest so focused on the room, instead of complimenting the lounge and restaurant?
OTOH, that’s pretty much exactly what iTunes and Google’s app store do. You see results presented in order of total reviews with some small weighting for favorability of reviews.
The result is that the first couple of products in any niche take off, and the rest of the hundreds (thousands?) of products in the same niche collectively garner just a couple percent of total sales. The top two to four products account for 90-95% of sales.
For a place selling data bits that are cheap to store and that somebody else paid to make, that’s a good model. For somebody running a brick-and-mortar warehouse and logistics system trying to become ubiquitous, that’d be ruinous.
I can’t remember where I read this advice, but I saw a magazine article once recommending basically the same method as the OP’s. It said that when looking at the little “Customer Reviews” bar graph you just ignore the 5-star and 1-star bars and look at the middle three. The idea was that 5- and 1-star reviews are often written by shills, cranks, and idiots, while 4-, 3-, and 2-star reviews are more likely to be by honest and reasonable people. If a product skews towards 4s it’s probably good, and if it skews towards 2s it’s probably bad.
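For the curious, here’s a minimal sketch in Python of that “ignore the extremes” heuristic. The function name and the example star counts are just illustrative, not anything from the half-remembered article:

```python
# A rough sketch of the "ignore the 1- and 5-star bars" heuristic.
# The counts and interpretation thresholds are made-up illustration values.

def middle_skew(star_counts):
    """star_counts maps star rating (1-5) to number of reviews.

    Drop the 1- and 5-star bars, then average the remaining 2/3/4-star
    reviews. Closer to 4 suggests "probably good", closer to 2 suggests
    "probably bad"."""
    middle = {s: star_counts.get(s, 0) for s in (2, 3, 4)}
    total = sum(middle.values())
    if total == 0:
        return None  # nothing but 1s and 5s -- exactly the suspicious case
    return sum(s * n for s, n in middle.items()) / total

# Example: a product whose middle reviews skew toward 4.
counts = {1: 12, 2: 5, 3: 9, 4: 30, 5: 80}
print(middle_skew(counts))  # ~3.6, leaning "probably good"
```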
The reason I saw those reviews was that my wife won a night’s stay and high-brow dinner at the hotel - so I asked around and met the manager who’d written those responses in person, after quaffing some fine wine. It is actually a ridiculously nice hotel, so I think he was right to defend it (I encouraged him to carry on in that vein).
I’m more wary of 5-star reviews if there are only a handful of reviews for something altogether.
If there are hundreds of reviews, a lot of the 5-star ones are probably from people who actually liked the product. If there are 7 reviews and 3 or 4 are 5-star…I’m going to see if someone else is selling it and has more reviews to read before I decide.
We read a 1-star review of our current dryer. The person’s only complaint was that they didn’t like the 15 second “song” that plays when the load finishes. Even the product description on the store’s website notes it can easily be turned off. :rolleyes:
Same here. Either they’re disproportionately angry about a minor feature/bug like elfkin’s example (often one that’s a plus for me), or they’re only explicable as someone with a general grudge against the creator of whatever’s being reviewed.
I do tend to mentally ‘downgrade’ 5 stars a bit, but they generally read as ‘someone who liked it was a bit over-enthusiastic’.
Which coincidentally highlights my point: I don’t pay any attention to the numerical ratings; I look at what people are saying.
The Google app store, I think, handles this pretty well: Using their patented Google-fu, they go through the reviews and see when a whole lot of people are saying basically the same thing, in slightly different words, and then tell you what those things are that they’re saying. And then you can judge for yourself how important those things are.
I looked at reviews for restaurants when we were going on vacation. One place had a handful of 5-star reviews and a handful of 1-star reviews, nothing in between. All the reviews were pretty generic – “Great food! Wonderful service!” or “Bad food. Poor service.” I figured the 5-star reviews were owners or staff, while the 1-star reviews were owners or staff of competing restaurants.
I guess I often take the 1-star reviews as being just as biased as the 5-star ones.
The reviews that crack me up are the ones for recipes. I’ve got a bazillion cookbooks, but if I get the urge to try something new, it’s a lot easier and faster to look online.
It’s amazing to me how often someone will give a recipe 5 stars, but the review will say, “I plan on trying this recipe soon – it sounds delicious!”
Or a 3-star review will say, “I thought this was just OK. I didn’t have any limes, so I substituted lemons. I left out the cilantro and cumin because I don’t like them. I don’t have an 8-inch pan, so I used a 9-inch one. I thought the dish was bland and a bit dry.” Very helpful, that.
Only tangentially related to the thread topic, but the ones that annoyed me were the yearly performance reviews at my last job. Workers were supposed to fill everything out as if it were a bell curve. They might be good at one or two things, but on the whole they should rate themselves average or just above average.
About half the employees filled it out as instructed and the other half maxed everything out. Then the managers had to go through and fill out their reviews of their workers, but if there was a significant difference between the manager’s values and the employee’s values, that would cause a review crisis because the two didn’t agree.
The solution would seem to be that, as the manager, you put the baseline about three-quarters of the way up the rating chart. But then you’re basically agreeing with the employees, whichever way they went. If they went high, you’re accepting that they’re good, just not quite as good as they say. If they went medium, you’re accepting that they’re mediocre, just not quite as mediocre as they say. So the employees who filled the thing out correctly end up getting boned by the math, and the employees who filled everything out incorrectly get a juicier bonus because of the math.
So I guess, hint for those of you out there doing score-based self-reviews…cheat!