

Crowdsourced Abuse Reporting

What is Crowdsourced Abuse Reporting?

Crowdsourced abuse reporting (CSAR) is a strategy in which a platform provides tools that let users report abusive or fraudulent behavior. CSAR is a common feature of social networks and online marketplaces, and it is often one of the first tools an emerging social network implements when it first experiences spam and abuse: the network provides a button or form that allows users to report and classify spam or abuse. Online marketplaces typically provide tools so that users can report not only abusive behavior such as spam but also instances in which they have been defrauded. These reports are sent to a moderation or trust and safety team for manual review and judgment.
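In practice, the mechanics are simple: a client-facing button or form produces a structured report, and the report lands in a queue for the moderation team. Below is a minimal sketch of that flow in Python; the names (AbuseReport, submit_report, the category list) are hypothetical illustrations, not any particular platform's API.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone
from queue import Queue

# Illustrative categories a "report abuse" form might expose to users.
REPORT_CATEGORIES = {"spam", "scam", "harassment", "fake_account", "other"}

@dataclass
class AbuseReport:
    reporter_id: str
    target_id: str  # the account or post being reported
    category: str
    details: str = ""
    created_at: datetime = field(default_factory=lambda: datetime.now(timezone.utc))

# In a real system this would be a durable store (a database table or
# message broker) feeding the trust and safety team's review tooling.
moderation_queue = Queue()

def submit_report(reporter_id: str, target_id: str, category: str, details: str = "") -> None:
    """Validate a user-submitted report and enqueue it for manual review."""
    if category not in REPORT_CATEGORIES:
        raise ValueError(f"unknown report category: {category}")
    moderation_queue.put(AbuseReport(reporter_id, target_id, category, details))

submit_report("user_123", "post_456", "spam", "Repeated crypto-giveaway links")
print(moderation_queue.qsize())  # -> 1 report awaiting manual review
```

Note that nothing happens to the reported content until a human works through the queue; that delay is exactly the weakness discussed below.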

What Should Companies Know about CSAR?

CSAR is an inherently flawed strategy for addressing abuse and fraud. Good user experience is crucial to the success of every online business, yet CSAR by definition depends on users having already experienced fraud or abuse. A user who experiences abuse or fraud on a platform is more likely to leave than to report it. CSAR also puts the onus on users to protect the platform, rather than on the platform to detect and remove abusive and fraudulent accounts proactively.

Another reason CSAR is an unreliable solution for preventing abuse and fraud is that the process is exceedingly slow: manual review is required before any action is taken on a report. Bad actors use mass registration tools to create vast numbers of fake accounts and carry out malicious activities automatically, and human reviewers cannot keep up with fraud and abuse generated at that scale. For example, the level of abuse users experience on Twitter is higher than any CSAR solution could ever handle:

  • Amnesty International’s Troll Patrol project found that in 2017, an estimated 1.1 million tweets sent to the 778 women in the study were deemed “abusive” or “problematic.” That amounts to one abusive or problematic tweet every 30 seconds on average, as the quick calculation below confirms.
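The back-of-the-envelope arithmetic behind that rate is easy to verify; the figures come straight from the study:

```python
# Amnesty's Troll Patrol figures: ~1.1 million "abusive" or "problematic"
# tweets sent to the 778 women studied over the course of 2017.
tweets = 1_100_000
seconds_per_year = 365 * 24 * 60 * 60  # 31,536,000

print(seconds_per_year / tweets)  # ~28.7 -> roughly one every 30 seconds
```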

It is crucial that companies move from outdated tools such as CSAR to modern solutions such as AI-powered contextual fraud detection. Twitter, for example, provides a number of tools for users to report platform violations, including fake accounts and spam, but it also uses machine learning to detect fake accounts and abusive content faster than its CSAR tools can. Before becoming a DataVisor client, Momo, a mobile-based social networking app, was plagued by fake accounts engaging in a range of malicious activities, including aggressively pushing spam, illegal commerce, phishing, prostitution ads, and abusive content. The company abandoned its traditional approach to fraud in favor of a method that leverages sophisticated artificial intelligence (AI), and the new AI-powered approach enabled the Momo platform to detect 45% more fraud.

Another flaw of CSAR is that the system itself is open to abuse: malicious users can spam the reporting system, flagging posts that aren't fraudulent or abusive. Few, if any, CSAR solutions include filters to prevent users from submitting false or unfounded complaints, so it is left to a manual review team to determine whether the complaints submitted through CSAR are warranted.
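One basic mitigation is to weight incoming reports by each reporter's track record, so that serial false-flaggers cannot flood the review queue. Here is a minimal sketch of such a filter, assuming moderators' past upheld/rejected decisions are recorded; all names and thresholds are illustrative:

```python
from collections import defaultdict

# Each reporter's history: how many past reports moderators upheld or rejected.
history = defaultdict(lambda: {"upheld": 0, "rejected": 0})

def report_weight(reporter_id: str) -> float:
    """Credibility score in (0, 1). Laplace smoothing (+1/+2) keeps
    brand-new reporters at a neutral 0.5."""
    h = history[reporter_id]
    return (h["upheld"] + 1) / (h["upheld"] + h["rejected"] + 2)

def should_escalate(reporter_id: str, threshold: float = 0.25) -> bool:
    """Route low-credibility reports to a slower, batched review lane
    instead of the main moderation queue."""
    return report_weight(reporter_id) >= threshold

history["serial_false_flagger"] = {"upheld": 1, "rejected": 40}
print(should_escalate("serial_false_flagger"))  # False: weight ~0.05
print(should_escalate("first_time_reporter"))   # True: neutral weight 0.5
```

Even with such a filter, the final judgment still falls to human reviewers, so the fundamental bottleneck remains.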

As a solution, CSAR doesn't treat the cause: bad actors mass-registering fake accounts for the sole purpose of committing abuse or fraud. It only addresses the symptoms: the fraudulent and abusive activities generated from those fake accounts. Countering reactive, symptom-oriented approaches like CSAR requires proactive solutions that leverage contextual detection capabilities and holistic data analysis to detect mass-controlled abusive accounts.
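To make "contextual detection" concrete, the toy sketch below groups new accounts by a shared registration fingerprint; an unusually large cluster is a candidate for mass registration by a single operator. Production systems, including the unsupervised machine learning approach referenced at the end of this article, use far richer signals, so treat this purely as an illustration of the principle:

```python
from collections import defaultdict

# Toy registration records; real systems draw on many more signals.
signups = [
    {"user": "a1", "ip": "203.0.113.7",  "device": "dev_1"},
    {"user": "a2", "ip": "203.0.113.7",  "device": "dev_1"},
    {"user": "a3", "ip": "203.0.113.7",  "device": "dev_1"},
    {"user": "b1", "ip": "198.51.100.4", "device": "dev_9"},
]

# Group accounts by registration fingerprint (IP + device). Legitimate
# users rarely arrive in large identical-fingerprint batches.
clusters = defaultdict(list)
for s in signups:
    clusters[(s["ip"], s["device"])].append(s["user"])

for fingerprint, users in clusters.items():
    if len(users) >= 3:  # threshold is illustrative
        print(f"suspicious cluster {fingerprint}: {users}")
```

The key difference from CSAR is that this analysis runs at registration time, before the fake accounts have harmed anyone.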

Prevent Platform Abuse and Fraud with DataVisor

Crowdsourced abuse reporting is a perfect example of why proactive fraud management matters. The choice between reactive and proactive is a simple one, and it is best understood through the lens of customer experience: do you want your users to experience fraud and abuse first, then report it? Or do you want them to be free of fraud and abuse from the start? Social platforms are notoriously vulnerable to malicious actors and activities; paradoxically, they also rely heavily on reputation and user trust for their success.

The solution lies in adopting a proactive approach to fraud and abuse prevention: identifying and neutralizing pending abuse and fraud before the actual attacks can launch. This is possible only through advanced data analysis and adaptive algorithms that can surface the hallmarks of an attack in the making and correlate patterns of behavior to link seemingly independent actions that are in fact components of coordinated fraud and abuse campaigns. Bot-scripted attacks, for example, are pervasive across social platforms and often massive in scale. Manual review and reactive analysis can never hope to keep pace with bot-powered fraud and abuse, but AI-powered solutions such as DataVisor's dCube are every bit as agile and fast-evolving as the attacks they counter.
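In miniature, "linking seemingly independent actions" can be pictured as connected-component analysis: accounts that share a strong signal (a device, payment instrument, or IP) are linked, and large components are flagged as coordinated campaigns. The union-find sketch below is a generic illustration of that technique, not DataVisor's actual dCube implementation; all account and signal names are invented:

```python
from collections import defaultdict

def find(parent, x):
    while parent[x] != x:
        parent[x] = parent[parent[x]]  # path compression
        x = parent[x]
    return x

def union(parent, a, b):
    parent[find(parent, a)] = find(parent, b)

# Accounts and the signals they exhibit; a shared signal links two accounts.
accounts = {
    "u1": {"dev_1", "ip_A"},
    "u2": {"dev_1", "card_X"},
    "u3": {"card_X", "ip_B"},
    "u4": {"ip_C"},
}

parent = {u: u for u in accounts}
by_signal = defaultdict(list)
for user, signals in accounts.items():
    for sig in signals:
        by_signal[sig].append(user)

for users in by_signal.values():
    for other in users[1:]:
        union(parent, users[0], other)  # transitively link accounts

components = defaultdict(list)
for u in accounts:
    components[find(parent, u)].append(u)

# u1, u2, and u3 look independent individually but form one linked group.
print([c for c in components.values() if len(c) > 1])  # [['u1', 'u2', 'u3']]
```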

Additional References

Blog Post: Defeating Mass Registration with Unsupervised Machine Learning

Source: Twitter Is Indeed Toxic for Women, Amnesty Report Says, WIRED

Source: Twitter Still Can’t Keep Up With Its Flood of Junk Accounts, Study Finds, WIRED