August 21, 2019 - Alex Niu

Helping Banks Stop Promotion Abuse and Gamer Fraud

As modern banks work to attract new customers, fraudsters are gaming the system to take advantage of sign-up bonuses. Stopping them requires proactive, AI-powered solutions.

In today’s hyper-competitive financial services sector, banks are facing ever-increasing pressure to grow their customer bases. While this climate is powering a great deal of innovation in terms of the new promotions, services, and incentives companies are rolling out, it also creates vulnerabilities that fraudsters are exploiting for criminal gain. Banks are looking for new ways to effectively balance growth and risk, and one question in particular is proving difficult for organizations to answer:

How do you continue to deliver excellent customer experiences while simultaneously preserving the highest standards of safety and security?

Promotion Abuse: The Challenge

Sign-up bonuses are an excellent example of this challenge. Financial organizations are seeing strong results from using rewards to incentivize the creation of new accounts, but in tandem with the rise of this approach, companies are also seeing an increase in abuse of these promotions—fraudsters trying to “game” the system by signing up to get a bonus, with no intention of ever spending beyond the minimums required. Some institutions have already implemented strategies to manage this issue: Chase’s “5/24” rule is a well-known example. As described in this post from The Points Guy, “In order to be approved for any Chase card subject to 5/24, you cannot have opened five or more personal credit cards across all banks in the last 24 months.”
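To make the rule concrete, here is a minimal sketch of the 5/24 check as publicly described. The function name, the 730-day approximation of 24 months, and the sample dates are our own illustration, not Chase’s actual implementation:

```python
from datetime import date, timedelta

def passes_5_24(card_open_dates, today):
    # True if fewer than five personal cards were opened in the last
    # 24 months. 730 days is a rough stand-in for 24 months.
    cutoff = today - timedelta(days=730)
    recent = [d for d in card_open_dates if d >= cutoff]
    return len(recent) < 5

# Hypothetical applicant: four cards opened in the last two years -> eligible.
opens = [date(2018, 1, 15), date(2018, 6, 2), date(2019, 2, 20), date(2019, 5, 9)]
print(passes_5_24(opens, today=date(2019, 8, 21)))  # True
```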

The goal for the banks is clear: they want good, long-term customers to sign up, not these so-called “gamers.” The 5/24 rule is one way to try to weed out users looking to exploit a sign-up bonus. Approaches like this fall short, however, in the face of scale.

More and more, the scenario is not a single fraudster applying for a single bonus. Instead, these incidents are turning into full-fledged, coordinated attacks, in which malicious actors buy or steal large amounts of personal information and use it to apply for high volumes of new accounts with associated sign-up bonuses. They pocket the bonus money, then use the accounts to spend with fake merchants (which they often own themselves), thereby continuing to meet the purchase minimums. In this way, they make money at little or no expense of their own. Detecting coordinated fraud rings operating at this kind of scale, involving multiple parties, is a different kind of challenge altogether, and the damage for financial institutions is manifold: wasted marketing spend, fewer good customers, an increase in fake merchants, and more.

Promotion Abuse: The Solution

The most challenging aspect of trying to solve this issue is that fraudster behavior so closely mimics good user behavior. After all, good customers like sign-up bonuses too, and we’re not being malicious when we sign up for a new account, rush to make the minimum purchases required, claim the bonus, and then return to being mindful of our spending habits. We just know a good deal when we see one. It’s still worth it for banks to incentivize us, because we’re likely to continue using the card, albeit responsibly, and hopefully become good, long-term customers.

So, how can businesses differentiate between good users and malicious ones?

Detection at Scale
The answer has to do with scale. For a fraudster, scale is necessary. Committing fraud one account at a time is too slow, and the odds of success are too low. Instead, fraudsters launch attacks at scale. If 999 out of 1,000 fake accounts get blocked, that’s fine; all they need is one to win. For those of us trying to stop the fraudsters, scale is, somewhat paradoxically, an advantage as well. While it’s very hard to spot a single act of fraud, it’s possible to expose patterns and connections between multiple actions and accounts, and in this way, we can reveal coordinated attacks coming from sophisticated fraud rings.
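To see why a single success can still pay off, consider a quick back-of-the-envelope calculation. Every figure below is hypothetical:

```python
# Back-of-the-envelope attacker economics; all numbers are hypothetical.
attempts = 1000          # fake applications submitted
block_rate = 0.999       # share caught by per-account checks
bonus = 300.00           # sign-up bonus per approved account, in dollars
cost_per_attempt = 0.10  # e.g., price of one stolen identity record

approved = attempts * (1 - block_rate)             # ~1 account slips through
profit = approved * bonus - attempts * cost_per_attempt

print(f"approved accounts: {approved:.0f}")        # 1
print(f"attacker profit:   ${profit:,.2f}")        # $200.00 despite a 99.9% block rate
```

Because the marginal cost of each extra application is near zero, tightening a per-account block rate alone rarely changes the attacker’s math; the attack has to be caught as a group.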

Patterns and Correlations
For example, we can review the behavior of New Customer A. In doing so, we can see that, immediately after opening a new account, New Customer A spends a specific amount at Merchant 1. We then look at Merchant 1, and see similar purchases being made by New Customers B, C, and D; similar amounts, similar timeframe. So then we look at the application data for New Customers A, B, C, D, and we note commonalities: similar phone numbers, similar email addresses. When we dig further into transaction data for Merchant 1, we discover that its only customers are New Customers A, B, C, D. At this point, it’s likely that the four customers are gamers, and the merchant is fraudulent. 
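The chain of reasoning above can be sketched in a few lines of Python. The data, attribute choices, and thresholds are purely illustrative, not DataVisor’s actual detection logic:

```python
from collections import defaultdict

# Toy data mirroring the walkthrough above; all records are hypothetical.
accounts = {
    "A": "jsmith01@mail.test", "B": "jsmith02@mail.test",
    "C": "jsmith03@mail.test", "D": "jsmith04@mail.test",
    "E": "pat.lee@mail.test",
}
transactions = [            # (new customer, merchant, amount in dollars)
    ("A", "Merchant1", 500), ("B", "Merchant1", 505),
    ("C", "Merchant1", 498), ("D", "Merchant1", 502),
    ("E", "Merchant2", 63),
]

# Step 1: group spending by merchant.
by_merchant = defaultdict(list)
for customer, merchant, amount in transactions:
    by_merchant[merchant].append((customer, amount))

# Step 2: flag merchants whose customers make near-identical purchases
# and share signup commonalities (here, a common email prefix).
for merchant, rows in by_merchant.items():
    customers = [c for c, _ in rows]
    amounts = [a for _, a in rows]
    tight_amounts = max(amounts) - min(amounts) < 0.05 * max(amounts)
    shared_prefix = len({accounts[c].split("@")[0][:6] for c in customers}) == 1
    if len(customers) >= 3 and tight_amounts and shared_prefix:
        print(f"Suspicious ring at {merchant}: customers {customers}")
```

Run on the toy data, this flags Merchant 1 together with New Customers A, B, C, and D, while the lone legitimate purchase at Merchant 2 passes untouched.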

Proactive Detection with Unsupervised Machine Learning
At DataVisor, we approach a scenario like this proactively, by shifting the focus from the transaction level (analyzing transactions to determine whether they’re legitimate or not) to the account level (analyzing new accounts to expose suspicious correlations). Using unsupervised machine learning (UML), our solution can analyze new account data in real time and surface suspicious patterns that suggest coordinated, fraudulent activity. If a significant number of new accounts sharing suggestive commonalities are opened within a similar time frame, our UML-powered solution will “cluster” these accounts together and flag them as suspicious, thereby preventing them from being used in a new attack.
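Below is a minimal sketch of this clustering idea, using simple graph clustering (union-find over shared signup attributes) as a stand-in for DataVisor’s proprietary UML algorithms. The attributes linked on and the cluster-size threshold are assumptions chosen for illustration:

```python
from collections import defaultdict

# Hypothetical signup records: (account, device ID, email address).
signups = [
    ("A", "dev-77", "jsmith01@mail.test"),
    ("B", "dev-77", "jsmith02@mail.test"),
    ("C", "dev-77", "jsmith03@mail.test"),
    ("D", "dev-78", "jsmith04@mail.test"),
    ("E", "dev-12", "pat.lee@mail.test"),
]

# Union-find: accounts sharing a strong attribute collapse into one cluster.
parent = {acct: acct for acct, _, _ in signups}

def find(x):
    while parent[x] != x:
        parent[x] = parent[parent[x]]  # path compression
        x = parent[x]
    return x

# Index accounts by (attribute, value) so shared values connect them.
index = defaultdict(list)
for acct, device, email in signups:
    index[("device", device)].append(acct)
    index[("email_prefix", email.split("@")[0][:6])].append(acct)

for group in index.values():
    for other in group[1:]:
        parent[find(other)] = find(group[0])

# Collect connected components and flag unusually large clusters for review.
clusters = defaultdict(list)
for acct, _, _ in signups:
    clusters[find(acct)].append(acct)

for members in clusters.values():
    if len(members) >= 3:
        print("Suspicious signup cluster:", members)
```

No labeled fraud examples are needed for this grouping step, which is what makes the account-level, unsupervised framing attractive for catching attacks that have never been seen before.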

Best of all, because UML does not require labels or historical training data, it can detect even new and unknown fraud types, enabling organizations to take early action and prevent damage before it occurs. This makes UML an ideal approach for dealing with even the most sophisticated emerging fraud techniques.

The DataVisor Approach

At DataVisor, our approach combines cutting-edge AI and machine learning technologies to correlate fraudulent and suspicious patterns across billions of accounts in real time. Patented, proprietary unsupervised machine learning algorithms work without labeled input data to automatically detect new and previously unidentified fraud and abuse patterns. Our solutions are informed by extensive domain expertise and bolstered by global intelligence derived from the 4.2B accounts we currently protect. The example of promotion abuse and gamer fraud we’ve discussed here is just one attack type that modern financial institutions have to contend with. Regrettably, there will be more. Modern fraudsters have become adept at changing their tactics to stay ahead of detection approaches. Fortunately, in unsupervised machine learning, we have the means to match them every step of the way.

about Alex Niu
Alex Niu is Director of Solution Engineering at DataVisor. He brings a decade of experience in the financial industry to his role, with a focus on risk management analytics. He was previously Director of Decision Science at American Express, where he led a team of data scientists developing and implementing advanced machine learning solutions.