URL Shortener Spam

What is URL Shortener Spam?

URL shortener spam is a technique spammers use to evade detection and blacklisting while spreading malicious content. The fraud works by “masking” a malicious link with a link-shortening service such as Bitly or one of many similar services. While many of these services do ultimately get blacklisted, many continue to be abused, and new providers continue to emerge. The goal of this technique is to make obvious spam links appear legitimate, tricking unsuspecting victims into clicking them.

Link-shortening services generate a “short” version of a long URL to facilitate easier sharing. The short link still redirects to the original page, but it typically contains a hash (or some other unique identifier) of the original URL that masks the original string. Spammers take advantage of this feature to hide the true landing page of malicious URLs.

What Should Online Platforms Know About URL Shortener Abuse?

Spammers and fraudsters use URLs to promote a variety of products and services, many of which are not legal or legitimate. Examples include sites selling black-market pharmaceuticals or pornography. Often, the URLs for these sites are giveaways as to their content, so spammers mask them in hopes of luring unsuspecting visitors. Cybercriminals get paid for those clicks, and this activity can be lucrative.
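The masking mechanic described above can be illustrated with a minimal, hypothetical shortener sketch: a token is derived from a hash of the original URL, so the short link reveals nothing about its destination. (The `sho.rt` domain, class name, and token length here are illustrative assumptions, not any real service's design.)

```python
import hashlib


class URLShortener:
    """Minimal sketch of a link-shortening service (hypothetical, for illustration)."""

    def __init__(self, base: str = "https://sho.rt/"):
        self.base = base   # hypothetical short domain
        self.table = {}    # token -> original URL

    def shorten(self, long_url: str) -> str:
        # Derive a short token from a hash of the original URL. The token
        # carries no hint of the destination, which is what spammers abuse.
        token = hashlib.sha256(long_url.encode()).hexdigest()[:7]
        self.table[token] = long_url
        return self.base + token

    def resolve(self, short_url: str) -> str:
        # The redirect step: look up the token and return the true landing page.
        return self.table[short_url.removeprefix(self.base)]
```

A suspicious URL like `http://cheap-pharma.example/buy` shortens to something like `https://sho.rt/1a2b3c4`, which looks no different from a shortened link to a legitimate page until it is resolved.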
Fraudsters also try to direct victims to fake sites that infect visitors with malware or capture valuable login details. There are numerous ways fraudsters can profit from diverting the traffic and clicks of innocent users.

Any platform that depends on user-generated content must strive to maintain the highest standards of trust, and pervasive spam content directly subverts that trust. Social media platforms and review sites are examples of the kinds of platforms currently battling spam content, and these sites are often riddled with posts containing malicious links masked by shortened links.

As reported by DataVisor, spammers typically host their spam infrastructure on “bulletproof” hosts rented from cloud services or other underground service providers. Given the cost of spam infrastructure, spammers are incentivized to reuse the same landing site across subsequent spam campaigns.

DataVisor Empowers Account-Level Detection To Prevent Spam Abuse

Traditionally, spam detection solutions have relied on reactive strategies to purge platforms of malicious and abusive spam content. Given the prevalence of bots across the fraud landscape, this is no longer a viable approach; the scale is simply too massive. Addressing the enormous challenges posed by global spam attacks requires moving away from a “transaction-level” mindset to one focused on the fraudsters behind the fraud. Transaction-level detection is inherently reactive: it identifies an action as malicious and then tries to respond as rapidly and as comprehensively as possible, in hopes of limiting the damage. Account-level detection instead identifies originating users and accounts as malicious before they can launch attacks.
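The landing-site reuse noted above is itself a detection signal: once shortened links are resolved to their final destinations, the same landing domain showing up behind many different short links suggests a shared spam campaign. A minimal sketch, assuming resolved (short URL, landing URL) pairs are already available and using an illustrative reuse threshold:

```python
from collections import defaultdict
from urllib.parse import urlsplit


def flag_reused_landing_sites(resolved_links, min_links=2):
    """Group resolved short links by final landing domain and flag domains
    reused across multiple links. The input pairs and threshold are
    illustrative assumptions; real pipelines would resolve redirects first."""
    by_domain = defaultdict(list)
    for short_url, landing_url in resolved_links:
        by_domain[urlsplit(landing_url).netloc].append(short_url)
    # Keep only domains that appear behind several distinct short links.
    return {domain: links for domain, links in by_domain.items()
            if len(links) >= min_links}
```

For example, two different short links both resolving to pages on the same black-market domain would surface that domain, while a domain seen behind a single link would not.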
This is made possible through the use of AI-powered solutions such as DataVisor’s dCube, which leverages the power of proprietary unsupervised machine learning algorithms, big data infrastructure, and a vast storehouse of global intelligence data to expose digital footprints, correlate patterns, and surface shared attributes that indicate coordinated fraudulent activity in real time.
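The idea of correlating patterns and shared attributes across accounts can be sketched in miniature. This is not DataVisor's actual algorithm, only a toy connected-components pass: accounts sharing any attribute value (a device fingerprint, an IP address) are linked, and clusters large enough to suggest coordination are surfaced. All names and the cluster-size threshold are illustrative assumptions.

```python
from collections import defaultdict


def correlate_accounts(accounts, min_cluster=3):
    """Toy sketch of account-level correlation: link accounts that share an
    attribute value and return clusters of at least `min_cluster` accounts.
    `accounts` maps account ID -> set of attribute strings (hypothetical)."""
    # Map each attribute value to the accounts that exhibit it.
    by_value = defaultdict(set)
    for acct, attrs in accounts.items():
        for value in attrs:
            by_value[value].add(acct)

    # Union-find: merge accounts that share any attribute value.
    parent = {a: a for a in accounts}

    def find(a):
        while parent[a] != a:
            parent[a] = parent[parent[a]]  # path compression
            a = parent[a]
        return a

    for members in by_value.values():
        members = sorted(members)
        for other in members[1:]:
            parent[find(other)] = find(members[0])

    clusters = defaultdict(set)
    for acct in accounts:
        clusters[find(acct)].add(acct)
    return [c for c in clusters.values() if len(c) >= min_cluster]
```

Two accounts sharing a device fingerprint and a third sharing an IP with one of them would collapse into a single flagged cluster, even though no single pairwise signal looks decisive on its own.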