Is AI making fake reviews worse?
Dec 23, 2024
The emergence of generative artificial intelligence tools that let people produce novel, detailed online reviews with almost no effort has put merchants, service providers and consumers in uncharted territory, watchdog groups and researchers say.
Phony reviews have long plagued many popular consumer websites, such as Amazon and Yelp. They are typically traded on private social media groups between fake review brokers and businesses willing to pay. Sometimes, such reviews are initiated by businesses that offer customers incentives such as gift cards for positive feedback.
But AI-infused text generation tools, popularized by OpenAI’s ChatGPT, enable fraudsters to produce reviews faster and in greater volume, according to tech industry experts.
The deceptive practice, which is illegal in the U.S., is carried out year-round but becomes a bigger problem for consumers during the holiday shopping season, when many people rely on reviews to help them purchase gifts.
Fake reviews are found across a wide range of industries, from e-commerce, lodging and restaurants, to services such as home repairs, medical care and piano lessons.
The Transparency Company, a tech company and watchdog group that uses software to detect fake reviews, said AI-generated reviews began appearing in large numbers in mid-2023 and have multiplied ever since.
For a report released this month, The Transparency Company analyzed 73 million reviews in three sectors: home, legal and medical services. Nearly 14% of the reviews were likely fake, and the company expressed a “high degree of confidence” that 2.3 million reviews were partly or entirely AI-generated.