The 4-Star Lie: Inflated Ratings Ruining 'Expert' Tech Reviews

Let's play a game. I'd like you to read the following review summaries and then guess the product rating on a 5-star scale, with a higher score being better.

Review #1

Pros: The Lenovo LaVie Z is incredibly light, with a powerful processor and a decent set of ports and connections.

Cons: It's more expensive than other premium, slim laptops; battery life is merely OK, and a frustrating keyboard makes typing a pain.

Review #2

Pros: Currently, the lightest laptop you can buy. 2,560-by-1,440 WQHD-resolution display. Surprisingly good performance thanks to an Intel Core i7-5500U processor.

Cons: Lightweight design feels a bit flimsy. No touch display. Cramped keyboard with confusing layout. Small port selection. Short battery life.

Both review summaries of the LaVie Z seem pretty mixed to me. Did you guess 3 stars? Maybe less? You'd be wrong, because the first rating is actually 4 stars out of 5 and the second review is 3.5 stars. We gave this notebook 2.5 out of 5 stars on Laptop Mag (which means not recommended), because of its confusing keyboard and short battery life. If you can't type on a laptop with ease, what good is a zippy Core i7 CPU?

The disconnect between what some reviewers experience and what they award a product is not exactly new. But in light of Amazon cracking down on fake user reviews, it seems fitting to have a conversation about inflated product ratings from so-called experts. I'd argue these reviews do even more harm than fake reviews.

As the editor in chief of two websites focused on reviews, Tom’s Guide and Laptop Mag, it's not my intent to throw stones at particular publishers, which is why I'm not naming the outlets in question for these reviews or others I mention here. There are also plenty of sites that are producing fair and honest reviews that are helpful to consumers. However, if you're reading this and you know you're being called out, it's probably time to take a good look in the mirror. Your kid-glove ratings are hurting shoppers and ultimately the companies for which you're bending over backwards.

Here's another not-so-glowing endorsement masked by another confusing 4-star rating, this time for the exceedingly mediocre HTC One M9 phone: "A luxury design that forgets about the basics." And another 4-star gem: "The updated HTC One M9 packs speed and software improvements into a handset that remains lustworthy in middle age, but it doesn't exceed the competition where it counts." And one more: "The iterative HTC One M9 remains a physical jewel and a strong Android smartphone, but fails to take the next step."

Call me nutty, but "iterative" and "fail" don't exactly jibe with 4 stars out of 5. All of these reviewers called out many of the same negatives we found in our 2.5-star review on Tom's Guide, including a mediocre camera, lackluster battery life and a design that felt the same as last year's model. (Actually, the design is worse, thanks to the strangely sharp edges.) Other outlets were more forthcoming about the One M9's flaws, with one saying right in the headline, "A little bit meh," and another rightly questioning whether HTC had "lost sight of the big picture."

I'm all for allowing subjectivity in ratings, but I'm sick of seeing head-scratchingly high scores that contradict the rest of the review.

Inflated reviews can sometimes be explained by irrational exuberance or awarding points for innovation over practicality, but they can also be caused by a desire to please vendors and procure a review unit promptly the next time around. Some reviewers may also feel pressure to contradict their own critiques with gold stars to help win big ad campaigns. After all, ads do pay the bills for media companies. But, ultimately, shoppers lose when reviewers aren't frank or brave enough to call 'em like they really see 'em.

I've second-guessed some of my own product ratings, too, especially earlier in my career. Just look at my review of the Slacker audio player from 2008. I suppose I was enamored with the concept, but I contradicted myself for sure. As I've grown older, though — and hopefully a bit wiser — I've learned to put myself in the shoes of the shopper more. We've also instituted a policy for reviews that compels editors and writers to discuss and debate star ratings before one is applied.   

There's a reason Amazon is starting to outrank many media outlets in search results for reviews. Even amid the sea of fake anecdotes published in Amazon reviews, it's fairly easy to spot genuine experiences from product owners, which helps shoppers decide whether a given gadget is worth the money. Professional product reviews graded on a too-kind curve will only push more people in the direction of e-tailers. Now Amazon is cracking down on fake user reviews with a new algorithm that bumps up the reviews deemed most true and helpful.

Manufacturers are usually smart enough to read between the lines, but inflated ratings ultimately hurt them, too. I've had many a conversation with companies defending our criticisms, only to have a product manager thank us for being honest and use our feedback to improve the company's products. Have some companies dropped us from the first wave of product reviewers after receiving a poor rating? It's very rare. But I'd rather be kicked to the curb for telling the truth than be thanked by a vendor for an ingratiating rating.

Mark Spoonauer is the editor in chief of Tom's Guide and Laptop Mag. Follow him at @mspoonauer. Follow Tom's Guide at @tomsguide, on Facebook and on Google+.

  • beegmouse
    Similarly skewing the ratings systems are misleading statistics.

    You can give an item 5 stars, but not 0 stars, so the scale really runs from 1 to 5. Its floor sits a full star above zero.

    The worst product/e-tailer in the world can still average better than 2 out of 5 stars: two 1-star reviews and one 5-star review give (1+1+5)/3 ≈ 2.3.

    When realistically, on a scale that bottomed out at zero, it should be getting (0+0+5)/3 ≈ 1.7.

    It's very misleading.

    Check out Trustpilot and its ratings of Simply Electronics, a terrible e-tailer.
    Reviewers' star ratings get translated into a 10-point score.

    And even though 30% of its reviews are as low as possible, it still manages 6/10
    and gets ranked as "Average".
  • Vlad Rose
    User reviews aren't very reliable for products either. A person who has a problem with a product bought online is far more likely to write a bad review than a person who has a good experience with it. Also, some of the bad reviews deal with shipping problems or user error rather than the actual product.

    Whenever I get ready to purchase a product, I try to read as many reviews and comments as I can across sites, not just the star rating or quick synopsis. It makes for a rather long process, but more often than not it pays off in the long run.
  • SamSerious
    Actually, the problem is not the subjectivity of the reviews, but the attempt to be less subjective by attaching an objective score. That score distorts the meaning of the review instead of underlining it.
    A truly subjective conclusion at the end of a product test is therefore much more helpful, as it leaves the author enough space to explain why the product may or may not be useful, and under what conditions. If there is only a super-short "conclusion" with a few too-negative and too-positive statements, many users will read only those and won't get a real picture of what the author thought.
    If you really want to deliver a short pro/con summary, it is crucial to add a text box with the personal thoughts of the tester.
  • ubercake
    I like to look at as many reviews as possible. I make sure I cover the high-star, low-star and in-between reviews, paying more attention to the comments and looking for consistency between them. This has served me fairly well over the years, so I don't necessarily trust the star ratings; but if I do see a product with a substantial number of one-star or two-star reviews, I'm likely to drop it from my shopping list.
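  • Editor's note: beegmouse's floor-effect arithmetic is easy to check. The quick sketch below (not from any ratings site, just illustrative) computes the two averages discussed in that comment; with the parentheses in the right place, the means come out to about 2.3 and 1.7, so the 1-star floor still inflates the worst-case score by more than half a star.

    ```python
    # Quick check of the "1-star floor" effect raised in the comments above:
    # on a 1-to-5 scale the worst possible review is 1 star, not 0, so a
    # terrible product's average gets pulled up toward the middle.

    def average(ratings):
        """Plain arithmetic mean of a list of star ratings."""
        return sum(ratings) / len(ratings)

    # Two worst-possible reviews plus one 5-star review, on each scale.
    floor_one = average([1, 1, 5])   # real scale: the floor is 1 star
    floor_zero = average([0, 0, 5])  # hypothetical scale with a true zero

    print(round(floor_one, 2))   # 2.33
    print(round(floor_zero, 2))  # 1.67
    ```

    Rescaling the 1-to-5 average onto a zero-floored range, (2.33 − 1) / 4 × 5 ≈ 1.67, recovers the zero-floor figure, which is one way a site could correct for the inflated floor.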