The emergence of generative artificial intelligence tools that let people produce detailed, original-sounding online reviews with almost no effort has put merchants, service providers and consumers in uncharted territory, watchdog groups and researchers say.
Phony reviews have long plagued many popular consumer websites, such as Amazon and Yelp. They are typically traded in private social media groups between fake review brokers and businesses willing to pay. Sometimes, such reviews are initiated by businesses that offer customers incentives such as gift cards for positive feedback.
But AI-infused text generation tools, popularized by OpenAI's ChatGPT, enable fraudsters to produce reviews faster and in greater volume, according to tech industry experts.
The deceptive practice is illegal in the U.S.
Where are AI-generated reviews showing up?
Fake reviews are found across a wide range of industries, from e-commerce, lodging and restaurants, to services such as home repairs, medical care and piano lessons.
The Transparency Company, a tech company and watchdog group that uses software to detect fake reviews, said it started to see AI-generated reviews show up in large numbers in mid-2023, and they have multiplied ever since.
For a report released in December, The Transparency Company analyzed 73 million reviews in three sectors: home, legal and medical services. Nearly 14% of the reviews were likely fake, and the company expressed a "high degree of confidence" that 2.3 million reviews were partly or entirely AI-generated.
In September, the Federal Trade Commission sued the company behind an AI writing tool and content generator called Rytr, accusing it of offering a service that could pollute the marketplace with fraudulent reviews.
The FTC, which banned the sale or purchase of fake reviews, said some of Rytr's subscribers used the tool to produce hundreds and perhaps thousands of reviews for garage door repair companies, sellers of "replica" designer handbags and other businesses.
It's likely on prominent online sites, too
Max Spero, CEO of AI detection company Pangram Labs, said the software his company uses has detected with almost certainty that some AI-generated appraisals posted on Amazon bubbled up to the top of review search results.
Pangram Labs has done detection for some prominent online sites, which Spero declined to name due to non-disclosure agreements. He said he evaluated Amazon and Yelp independently.
Many of the AI-generated comments on Yelp appeared to be posted by individuals who were trying to publish enough reviews to earn an "Elite" badge, which is intended to let users know they should trust the content, Spero said.
To be sure, just because a review is AI-generated doesn't necessarily mean it's fake. Some consumers might experiment with AI tools to generate content that reflects their genuine sentiments. Some non-native English speakers say they turn to AI to make sure they use accurate language in the reviews they write.
"It can help with reviews (and) make it more informative if it comes out of good intentions," said Michigan State University marketing professor Sherry He, who has researched fake reviews. She says tech platforms should focus on the behavioral patterns of bad actors, which prominent platforms already do, instead of discouraging legitimate users from turning to AI tools.
What companies are doing
Prominent companies are developing policies for how AI-generated content fits into their systems for removing phony or abusive reviews. Some already employ algorithms and investigative teams to detect and take down fake reviews but are giving users some flexibility to use AI.
Spokespeople for Amazon and Trustpilot, for example, said they would allow customers to post AI-assisted reviews as long as they reflect their genuine experience. Yelp has taken a more cautious approach, saying its guidelines require reviewers to write their own copy.
The FTC's rule banning fake reviews, which took effect in October, allows the agency to fine businesses and individuals who engage in the practice. Tech companies hosting such reviews are shielded from the penalty because they are not legally liable under U.S. law for the content that outsiders post on their platforms.
Tech companies, including Amazon, Yelp and Google, have sued fake review brokers they accuse of peddling counterfeit reviews on their sites.
Spotting fake AI-generated reviews
Consumers can try to spot fake reviews by watching out for a few possible warning signs, according to researchers. Overly enthusiastic or negative reviews are red flags. Jargon that repeats a product's full name or model number is another potential giveaway.
When it comes to AI, research conducted by Balázs Kovács, a Yale professor of organizational behavior, has shown that people can't tell the difference between AI-generated and human-written reviews.
However, there are some "AI tells" that online shoppers and service seekers should keep in mind. Pangram Labs says reviews written with AI are typically longer, highly structured and include "empty descriptors," such as generic phrases and attributes. The writing also tends to include cliches like "the first thing that struck me" and "game-changer."
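The warning signs above can be sketched as a toy heuristic. This is an illustrative example only, not Pangram Labs' actual method: the word-count threshold, structure check and cliche list are assumptions made up for demonstration.

```python
# Toy heuristic sketch of the "AI tells" described above: unusual length,
# heavy list-like structure, and stock phrases. Thresholds and the phrase
# list are illustrative assumptions, not any real detector's rules.

CLICHES = ["the first thing that struck me", "game-changer"]

def ai_tell_score(review: str) -> int:
    """Count rough warning signs; a higher score means more AI-like
    according to this (crude) heuristic."""
    text = review.lower()
    score = 0
    if len(review.split()) > 150:        # unusually long for a casual review
        score += 1
    if review.count("\n- ") >= 2:        # bulleted structure mid-review
        score += 1
    score += sum(1 for phrase in CLICHES if phrase in text)
    return score
```

A short, casual review scores 0, while one stuffed with the cliches flagged by researchers scores higher. A real detector would rely on statistical language models rather than keyword counting, which is easily evaded.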