Digital Fraud Wiki

Content Abuse

What is Content Abuse?

When discussing the term “content abuse,” “content” refers specifically to user-generated content (UGC): essentially, any content a user posts online. UGC includes everything from product reviews on e-commerce sites and service reviews on travel sites, to consumer complaints on customer service platforms and user comments on company forums, to images uploaded to social media and copy published on blogs, and more.

Different businesses and platforms incorporate varying degrees of UGC. Social media platforms, for example, consist almost entirely of user-generated content, whereas financial institutions may limit UGC to their customer service platforms.

Content abuse can be defined as the intentional posting of UGC that is fake, abusive, fraudulent, deceptive, or otherwise toxic and ill-intentioned.

What Should Companies Know About Content Abuse?

User-generated content is a primary driver of authenticity and trust, which are, in turn, core components of a positive brand reputation. According to research from Stackla: “79 percent of people say user-generated content highly impacts their purchasing decisions,” and “90 percent of consumers say authenticity is important when deciding which brands they like and support.”

Content abuse can dangerously subvert brand reputation by eroding trust and authenticity. As noted in DataVisor’s Q3 2019 Fraud Index Report, “if users can no longer trust the content they engage with on a particular platform, they will eventually cease to use the platform at all, and when customer churn increases, investors worry, advertisers depart, and businesses struggle.”

Content abuse does not occur in a vacuum, and there is no single technique that fraudsters use to commit these attacks. Content abuse lives within a complex ecosystem that spans everything from data breaches, identity theft, and account takeover, to phishing, buyer-seller collusion, and adversarial machine learning.

Malicious content like spam has been around for decades, but automation and the widespread use of bots have made it far easier for fraudsters to scale their activities. At the same time, the ongoing democratization of online access and the proliferation of new apps, APIs, and third-party service providers present vastly more points of entry than previously existed. Recurring data breaches, meanwhile, give fraudsters a steady supply of data that can be used to create new fake accounts.

Today, fraudsters use bots to carry out content abuse at massive scale, and bad actors have become highly sophisticated at obfuscating their intentions and making their fake and malicious accounts appear legitimate.

How to Prevent Content Abuse

Modern fraudsters engaged in content abuse can create and post massive amounts of content quickly, and they adjust their techniques just as rapidly. They are able to create vast hordes of fake accounts and use them to disseminate malicious content in ways that are increasingly difficult to detect. Fraud solutions that rely on manually created features, rules, and blacklists to keep pace with rapidly evolving content abuse techniques are not viable.

At DataVisor, we observe that fraudsters typically automate the generation of content for the fake accounts under their control. This creates patterns that are discernible with the right technologies and solutions in place. When components such as introductions, messages, names, or nicknames are more similar across accounts than would generally be expected for unrelated users, that similarity indicates coordinated activity. By combining unsupervised machine learning (UML) with deep learning, we can train models to recognize when a group of user accounts shares suspiciously “similar” content, without ever having to define what “similar” means.
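To make the idea concrete, here is a minimal sketch of similarity-based account grouping. It is not DataVisor’s actual system: the account IDs, sample profile texts, the TF-IDF character n-gram features, the DBSCAN clustering step, and the distance threshold are all illustrative assumptions standing in for the unsupervised and deep learning models described above.

```python
# A minimal, illustrative sketch (not DataVisor's production models).
# Assumes we have (account_id, profile_text) pairs and want to surface
# groups of accounts whose text is unusually similar to one another.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.cluster import DBSCAN

accounts = [
    ("acct_001", "Hi, I'm Alex! Crypto deals DM me"),
    ("acct_002", "Hi, I'm Alexx! Crypto deals DM me"),
    ("acct_003", "Hi, I'm Alek! Crypto dealz DM me"),
    ("acct_004", "Long-time customer, mostly post photography tips"),
    ("acct_005", "Here for book recommendations and reviews"),
]

ids, texts = zip(*accounts)

# Character n-grams catch near-duplicate templates even when fraudsters
# tweak spelling to slip past exact-match blacklists.
vectors = TfidfVectorizer(analyzer="char_wb", ngram_range=(3, 5)).fit_transform(texts)

# DBSCAN with cosine distance groups accounts that sit unusually close
# together; the eps threshold here is an arbitrary example value that
# would need tuning. Label -1 means "no suspiciously similar neighbors."
labels = DBSCAN(eps=0.3, min_samples=2, metric="cosine").fit_predict(vectors)

for cluster in sorted(set(labels) - {-1}):
    group = [ids[i] for i, label in enumerate(labels) if label == cluster]
    print(f"Suspicious cluster {cluster}: {group}")
```

In a real deployment, a grouping signal like this could feed account-level risk scores, so that clustered accounts are reviewed or blocked before they post anything at all.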

Using this approach, we can address content abuse at the source, flagging and neutralizing suspicious accounts before they are ever used to generate malicious content. Proactivity is the only real solution; reactive approaches that kick in after malicious content is posted are uniformly too late, because the reputational damage will already have been done.