So, another flurry of stories about fake reviews on Yelp, TripAdvisor, and elsewhere, this time prompted by a Cornell algorithm that can supposedly spot a bought-and-paid-for fake with 90 percent accuracy.
(The New York Times and NPR, among others, covered this.)
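For the curious, the general shape of such a detector is easy enough to sketch: train a classifier on a corpus of reviews labeled genuine or fake, and let it learn which word patterns betray the fakes. What follows is a minimal, hypothetical version of that idea in Python; the toy training data, the n-gram features, and the choice of logistic regression are my illustrative assumptions, not a description of the Cornell system, which used richer linguistic features.

```python
# Minimal sketch of a fake-review classifier, assuming a labeled corpus.
# Everything here (data, features, model) is illustrative, not the
# Cornell team's actual method.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical training data: review text paired with a fake/genuine label.
reviews = [
    "The staff went above and beyond, truly a magical stay!",
    "Room was clean, breakfast was average, would stay again.",
    "Absolutely the most luxurious experience of my entire life!!!",
    "Check-in took twenty minutes; the pool closed early.",
]
labels = [1, 0, 1, 0]  # 1 = fake, 0 = genuine (toy labels)

detector = make_pipeline(
    TfidfVectorizer(ngram_range=(1, 2)),  # unigram and bigram features
    LogisticRegression(),
)
detector.fit(reviews, labels)

# Classify an unseen review.
print(detector.predict(["An unforgettable, flawless, five-star paradise!"]))
```

A real system would need thousands of labeled examples and honest cross-validation; the point is only that the machinery itself is not exotic.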
As I wrote several years ago, it’s getting ever harder for humans — including me — to identify fakery. It’s nice that an algorithm can improve the odds, at least for now, but let’s not get ahead of ourselves:
- Even the best algorithm won’t finger a talented faker, and it also won’t identify a talented programmer who is posting fake reviews algorithmically. (I don’t know for a fact that the latter occurs. But since I can imagine a way to do it, I expect someone smarter than me is already making money at it. Story of my life.)
- Publicity about fake-sniffing algorithms and their methodologies will increase the average “quality” of fake reviews, making them even harder to spot for humans and algorithms alike. We’ve seen this type of arms race before: Algorithms are like antibiotics; they invite the enemy to evolve.
There’s no foolproof way to spot a fake review. However, I do have a foolproof way to spot a genuine review:
Was it written by a friend? If so, it is genuine.
Evolve around that!
Short of that, one rough tell: most genuine reviewers use common phrases to express satisfaction or discontent, so an abundance of superlatives, overly flowery criticism, or flattery should prompt suspicion.
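To make that heuristic concrete, here is a toy suspicion score in Python. The word list and the per-100-words normalization are illustrative assumptions on my part, not a validated detector, and a determined faker would of course evolve right past it.

```python
# Toy suspicion score: count superlatives and flowery words per 100 words.
# The word list and any threshold you pick are illustrative assumptions.
FLOWERY = {"amazing", "incredible", "unforgettable", "perfect",
           "best", "worst", "magical", "flawless", "stunning"}

def suspicion_score(review: str) -> float:
    """Return flowery-word hits per 100 words of the review."""
    words = review.lower().split()
    if not words:
        return 0.0
    hits = sum(1 for w in words if w.strip(".,!?") in FLOWERY)
    return 100.0 * hits / len(words)

print(suspicion_score("The most amazing, perfect, unforgettable stay ever!"))
```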