

June 22, 2023 - Fang Yu

How AI is Defeating Real-time Fraud

$195 billion—that’s the value of real-time payments (RTP) in 2022. At a 35% CAGR, this already staggering number will only continue to rise.

Countries around the world are implementing real-time payment rails, if they don’t have them already. FedNow is one of the United States’ first true attempts to offer real-time payments at scale. But faster payment platforms like Zelle already dominate the money transfer market, with total transfer value approaching $500 billion in 2021.

When fraudsters see these numbers, their eyes light up and their heads fill with schemes. The schemes, it turns out, aren’t all that different from the ones they’ve been running for years. But with AI tools to assist them, scammers have found ways to take advantage of real-time payments to victimize customers at unprecedented levels.

Machine learning and AI might be tools fraudsters can exploit, but they’re also powering the fight against fraud on the front lines. I believe if we see widespread adoption of AI in fighting real-time fraud, we can stamp it out globally.

Once I explain exactly how AI detects fraud in real-time—and the ability we have to scale this technology—I believe you will feel the same way.

Adjusting to real-time fraud means filling the gaps

Many banks today will tell you they’re more than ready to offer real-time payments. If you dig deeper though, they may also reveal they have worries about how their current systems detect fraud.

It’s understandable why they feel this way. Banks are used to processing transactions—ACH transfers, wires, etc.—in batches during their business hours. With RTP being 24/7/365, that opens the gates for fraud 24/7/365 too. In fact, fraudsters know this and cleverly exploit it by attacking more on weekends outside business hours.

The ever-present availability of RTP isn’t the only thing the fraudsters exploit. They trick users into authorizing fraudulent payments. AI tools help fraudsters craft phishing emails and orchestrate scams that make victims believe they’re complying with real processes from their bank. Romance scams and money mule schemes are easier to operate thanks to real-time transfer and settlement of payments.

All of this increased fraud raises the question: how do we protect customers in real time? Relying on manual reviews and traditional case management processes isn’t fast enough. We need to cover the gaps and monitor transaction activity all the time.

That means: 1) looking at a 360° view of customer activity, 2) taking in all network and linkage information to assess fraud risk, and 3) automating as much of fraud detection as possible.

A complete view of customer activity

To the first point, we have to take in all information about a user’s actions, not just the transaction. For example, has the user recently updated their phone number? Is the payment recipient brand new to them? Did they call customer service? All of this is important information when determining payment risk.
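To make this concrete, here is a minimal sketch of how contextual signals beyond the transaction itself might feed a payment risk assessment. All field names, signal names, and the example event are hypothetical, not from any production system:

```python
# Illustrative sketch: surfacing contextual risk signals around a payment,
# not just the transaction amount. Field and signal names are hypothetical.

def payment_risk_signals(event: dict) -> list[str]:
    """Return the contextual risk signals present in a payment event."""
    signals = []
    if event.get("phone_changed_days_ago", 999) <= 7:
        signals.append("recent_phone_change")         # credentials may be compromised
    if event.get("new_recipient", False):
        signals.append("first_payment_to_recipient")  # no prior relationship
    if event.get("recent_support_call", False):
        signals.append("recent_customer_service_contact")
    return signals

event = {
    "amount": 2500.00,
    "phone_changed_days_ago": 2,   # phone number updated two days ago
    "new_recipient": True,         # never paid this recipient before
    "recent_support_call": False,
}
print(payment_risk_signals(event))
# → ['recent_phone_change', 'first_payment_to_recipient']
```

A real system would weigh dozens of such signals together; the point is that each one comes from activity around the payment, not the payment alone.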

Linking network information

To the second point, zooming out to look beyond a single account helps us reveal networks of scams where there are multiple victims. Too many systems focus only on the sender side, but this fails with RTP because real people fall for scams and authorize payments to scammers. At the individual account level, fraud signals are limited. Fraud prevention systems need to go one level up and see who victims are sending money to.

Social graph information can help here as well. Let’s say we have a person send money to a brand new receiver, but many of their friends have transferred with this receiver before. The social graph shows this is a relatively low-risk transaction. But, if that same person sends to a receiver completely outside their network on the social graph, it’s much riskier. In these cases, network linkage graph analysis is crucial to provide real-time signals for detection and investigation.
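The social-graph signal described above can be sketched in a few lines. This is an illustrative toy, not a production algorithm; the graph data, names, and the idea of using a simple friend-overlap fraction are assumptions for the example:

```python
# Illustrative sketch: a payment to a new receiver is lower risk when many
# of the sender's contacts have already transacted with that receiver.
# All graph data here is hypothetical.

def receiver_familiarity(sender: str, receiver: str,
                         friends: dict[str, set[str]],
                         past_receivers: dict[str, set[str]]) -> float:
    """Fraction of the sender's friends who have previously sent money to
    this receiver. 0.0 means the receiver is completely outside the
    sender's network on the social graph."""
    contacts = friends.get(sender, set())
    if not contacts:
        return 0.0
    overlap = sum(1 for f in contacts if receiver in past_receivers.get(f, set()))
    return overlap / len(contacts)

friends = {"alice": {"bob", "carol", "dave"}}
past_receivers = {
    "bob": {"acme_landscaping"},
    "carol": {"acme_landscaping"},
    "dave": set(),
}

# Two of Alice's three friends already pay this receiver: relatively low risk.
print(receiver_familiarity("alice", "acme_landscaping", friends, past_receivers))
# A receiver no friend has ever paid: treat as much higher risk.
print(receiver_familiarity("alice", "unknown_wallet", friends, past_receivers))  # → 0.0
```

In practice the graph spans devices, emails, and payment histories across millions of accounts, which is why this analysis has to run as part of the real-time decision rather than in a batch job.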

Automating fraud detection

To the third point, automating fraud monitoring and detection is the only feasible way to catch real-time scams. Manual reviews simply can’t keep up. That’s where AI comes into the fraud fighter’s arsenal and becomes their strongest weapon of defense.

How AI fraud platforms fight fire with fire

Many fraudsters and crime rings rely on AI as a crucial part of scams that trick victims through social engineering. In addition to phishing emails and fake websites, ChatGPT helps fraudsters make detailed fake bank profiles so convincing they can trick veteran fraud analysts. Low-cost labor from other countries already allows fraudsters to run these scams cheaply. AI tools help scale up the output and magnify the damage.

These fake identities are an especially dangerous threat when it comes to real-time scams like Zelle fraud. But the same AI capabilities fraudsters use to make these fake profiles in bulk are the ones we can use to detect them.

Right now, many fraud detection systems are reactive. Rules-based systems and models trained on past cases of fraud learn to flag certain known patterns for manual review. These systems aren’t built to stand up to AI-powered real-time fraud because fraudsters test which patterns are detected and change too quickly for the systems to keep up.

What we need is an adaptive fraud platform—one powered by AI to learn in real time. That way when patterns change, AI can still detect fraud and even predict new patterns of fraud before they happen. Unsupervised machine learning already does this, and platforms like DataVisor’s leverage device and behavioral intelligence to spot the most sophisticated manipulation techniques.
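One simple way to picture the unsupervised idea: instead of matching known fraud rules, group accounts by shared attributes and flag unusually dense clusters of correlated behavior. The sketch below uses a single attribute (a device fingerprint) and a hypothetical threshold; real platforms correlate far richer device and behavioral features:

```python
# Minimal sketch of unsupervised correlated-pattern detection: accounts that
# cluster on a shared attribute (here, a device fingerprint) may indicate
# coordinated activity such as a mule ring. Data and threshold are hypothetical.
from collections import defaultdict

def flag_correlated_clusters(signups: list[dict], min_cluster: int = 3) -> list[list[str]]:
    """Group accounts that share a device fingerprint and return clusters
    large enough to suggest coordinated activity."""
    by_device = defaultdict(list)
    for s in signups:
        by_device[s["device_id"]].append(s["account"])
    return [accts for accts in by_device.values() if len(accts) >= min_cluster]

signups = [
    {"account": "u1", "device_id": "dev-A"},
    {"account": "u2", "device_id": "dev-A"},
    {"account": "u3", "device_id": "dev-A"},  # three accounts on one device
    {"account": "u4", "device_id": "dev-B"},  # an ordinary singleton
]
print(flag_correlated_clusters(signups))
# → [['u1', 'u2', 'u3']]
```

Because nothing here depends on a labeled example of past fraud, this kind of detection keeps working when fraudsters change tactics, which is exactly the adaptivity the paragraph above calls for.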

That’s not the extent of the role AI can play, though. Fraud analysts can use AI when they see a new attack pattern to speed up documentation and implementation of new defense strategies. That cuts the time fraudsters have to take advantage of vulnerabilities before they’re closed and lets fraud-fighting teams work at the same speed fraudsters do.

Of course, things like customer education about scams and continual learning and retraining of models are key in the fight. Ideally, we would stop users from engaging with suspicious receivers similar to how web browsers warn users before letting them access a shady website. But to truly win against fraudsters, we need to control machine learning and put it to work fighting in the good guys’ corner.

Getting ready for FedNow and real-time’s mainstream moment

As I discussed in a recent webinar with my industry colleagues from Fiserv and Ekata, FedNow’s arrival in July marks a significant moment in the history of real-time payments. When we talk to banks and credit unions they’re excited about the boost it will bring to customers as they send payments and transfers.

But digital banking and fintech tools have been in the spotlight for a while. In fact, they’ve both grown exponentially, with consumers using Zelle and digital banking apps in huge numbers, especially since the pandemic. Financial institutions that don’t offer RTP need to add it or they risk falling behind in the eyes of customers who expect real-time payment options by default.

It’s also crucial to put the infrastructure in place to mitigate real-time fraud in a way that prioritizes customer experience. Manual reviews, false positives, and damaging real-time scams need to be addressed in a way that still allows customers to enjoy the speed of RTP.

The bad news is real-time fraud will only increase. The good news for banks is AI solutions are readily available for them to leverage right now.

How to start using AI in your fraud platform

The best way to start detecting more fraud and stopping real-time scams with AI is to pick a best-in-class solution: one that is ready-made to fit quickly into your fraud strategy and start improving fraud detection and operational efficiency right away, all while minimizing false positives to protect a good customer experience.

That’s exactly what you get in DataVisor’s turnkey solution. Using our patented machine learning technology, native device intelligence, and a robust decision engine, you can protect all payment channels and squash real-time scams.

Let’s highlight a few key features our Real-time Payment Solution offers:

  • Pre-built, standardized event schemas with the flexibility to map standard fields to custom ones, all while complying with ISO 20022 for unparalleled fraud detection.
  • 10+ pre-configured rulesets that cover popular fraud scams like account takeover (ATO), new account fraud, and new destination fraud.
  • Bespoke supervised and unsupervised machine learning models, with our patented unsupervised solution being the only real-time production-grade solution with >95% accuracy.
  • Ready to use third-party data connectors for device data, identity data, dark web data, and more.
  • dEdge SDK device intelligence that provides advanced device and digital behavior data to enhance fraud detection.
  • Up to 5X greater operational efficiency via auto-decision configuration, bulk review capabilities, and graph linkage visualization.
  • Fast Integration: 90% of our clients go live within six weeks.

This technology is already in use at numerous Fortune 500 companies worldwide. Our award-winning detection platform only continues to evolve and improve, taking away the gaps fraudsters can attack. If you’d like to learn more about how we do it, and how AI fraud detection should ideally fit to solve your specific prevention needs, let’s have a chat!

about Fang Yu
Fang spent 8 years at Microsoft Research developing big-data algorithms and systems for identifying various malicious traffic such as worms, spam, bot queries, hijacked accounts, and fraudulent financial transactions across a wide range of Microsoft products.