User-Generated Content Offers Many Positives, But the Threat of Content Abuse is Omnipresent
User-generated content (UGC) plays an increasingly vital role in our modern, digital economy. UGC is uniquely impactful in the retail sector, where content such as customer feedback and product reviews can make or break launches, promotions, and even reputations. Platforms that support UGC also offer dynamic opportunities to engage directly with customers.
However, while UGC offers a great deal that is positive, there are serious concerns with the medium as well. Where there's UGC, there's content abuse.
DataVisor’s Q3 2019 Fraud Index Report offers a deep dive into why content abuse is not just an ongoing problem, but a worsening one. The report details the advanced techniques fraudsters use to commit content abuse, and outlines the risks businesses face when they don’t address the problem proactively.
The importance of UGC can hardly be overstated. As reported by Stackla, “79 percent of people say user-generated content highly impacts their purchasing decisions,” and “90 percent of consumers say authenticity is important when deciding which brands they like and support.”
To compile the proprietary research for the report, DataVisor analyzed more than 80 billion events, 758 million users, 368 million IP addresses, 1.05 million email domains, and 229,000 device types. In the process, an average of over 140 million unique pieces of UGC were observed each day in DataVisor’s global intelligence network.
On average, 7 percent of all posts, listings, comments, and messages contain malicious or fraudulent content. DataVisor research indicates this rate varies by platform and content type: social media and email service platforms see noticeably higher rates of content abuse than marketplaces or review sites, where content is typically better curated.
Ironically, the same force that makes UGC possible in the first place is also why content abuse is so rampant: the rapidly expanding democratization of online access. Today, countless more “entry points” enable fraudsters to introduce malicious content into online ecosystems.
There's also the matter of scale. Armed with advanced automation capabilities and the power to unleash vast armies of bots, modern fraudsters can create and manage thousands of fake accounts and use them to perpetrate many kinds of content abuse attacks. Worse still, they can do so with alarming speed. The new Fraud Index Report finds that 60 percent of new fake accounts commit content abuse within two hours of being created, and 76 percent do so within 24 hours.
Given the open access fraudsters have at their disposal, and the extent to which they can leverage this access to continue producing and injecting toxic content, it might seem inevitable that content abuse will be an ongoing concern. However, new technologies offer us a viable means to neutralize the problem.
No matter how cleverly fraudsters attempt to obfuscate their efforts, or how vast and wide-ranging the scale of their illicit activities, they still leave detectable digital trails, and this is where artificial intelligence can be leveraged to combat them. Machine learning models excel at identifying subtle relationships in large volumes of data, and unsupervised learning algorithms can do so without relying on labels. These tools are therefore ideal for addressing coordinated attacks at scale, particularly when used in tandem with deep learning, which enables models to recognize when a group of user accounts shares suspiciously “similar” content — without anyone having to explicitly define what “similar” means.
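To make the unsupervised idea concrete, here is a deliberately tiny sketch of the underlying intuition: accounts that post near-duplicate content get linked together and surface as a coordinated group, with no labeled training data required. All names here are hypothetical, and the toy word-n-gram similarity stands in for the far richer signals and deep learning models a production system like DataVisor's would use.

```python
from itertools import combinations

def shingles(text, k=3):
    """Break text into overlapping word k-grams ("shingles")."""
    words = text.lower().split()
    return {" ".join(words[i:i + k]) for i in range(len(words) - k + 1)}

def jaccard(a, b):
    """Jaccard similarity between two shingle sets (0.0 to 1.0)."""
    return len(a & b) / len(a | b) if a | b else 0.0

def group_similar_accounts(posts, threshold=0.5):
    """Cluster accounts whose posts are near-duplicates, without labels.

    posts: dict mapping account_id -> post text (hypothetical shape).
    Returns account groups (connected components of the similarity
    graph) that contain more than one member.
    """
    sigs = {acct: shingles(text) for acct, text in posts.items()}

    # Union-find over accounts linked by high content similarity.
    parent = {acct: acct for acct in posts}
    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x

    for a, b in combinations(posts, 2):
        if jaccard(sigs[a], sigs[b]) >= threshold:
            parent[find(a)] = find(b)

    groups = {}
    for acct in posts:
        groups.setdefault(find(acct), []).append(acct)
    return [g for g in groups.values() if len(g) > 1]

# Two bot-like accounts posting templated spam are grouped together;
# the organic reviewer is left alone.
posts = {
    "acct_1": "Amazing product best deal click the link now",
    "acct_2": "Amazing product best deal click the link today",
    "acct_3": "The battery life on this laptop is disappointing",
}
flagged = group_similar_accounts(posts)
```

The design point this illustrates is the one in the paragraph above: nothing here defines what spam "looks like" in advance; the group emerges purely from the accounts' suspicious similarity to one another.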
As the new Fraud Index Report makes clear, user-generated content can be both a blessing and a curse. However, with the right solutions and technologies in place, businesses can reap the benefits while mitigating the risks.
Ting-Fang Yen is the director of research for DataVisor, a leading fraud detection and prevention company.