
Digital Fraud Wiki

Your source for the latest fraud intelligence, insights, research, and commentary.

How Deepfakes Are Made and How Fraudsters Use Them

What is a Deepfake?

A deepfake is a piece of synthetic media that uses deep learning, a subset of machine learning, to create or alter content. The term “deepfake” combines “deep learning” and “fake.” Deepfakes are usually video or audio creations.

Deepfake videos often place one person’s face onto another person’s body, usually to make it appear that the person did or said something they never did. Audio deepfakes can replicate a particular person’s voice, even if that person never spoke the words in the recording.

The technology behind deepfakes has advanced so rapidly that distinguishing legitimate content from inauthentic content online has never been more difficult than it is today. Echoing this sentiment, Microsoft president Brad Smith has described deepfakes as the biggest AI-related threat.

How are Deepfakes Made?

At their core, deepfakes use a machine learning technique called deep learning. The first step is collecting extensive data on the target, such as images, audio, or video. This data goes into a Generative Adversarial Network (GAN), where two neural networks, the generator and the discriminator, compete in an iterative process. As the generator crafts counterfeit content, the discriminator critiques it. Over time, the generator refines its outputs, resulting in a startlingly realistic piece of fabricated media. The sketch below illustrates this adversarial loop on toy data.
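To make the generator-discriminator dynamic concrete, here is a minimal, illustrative sketch in PyTorch that trains a toy GAN to mimic a simple one-dimensional distribution. The model sizes and names are our own simplifications; real deepfake systems train far larger networks on images or audio.

```python
# Illustrative only: a toy GAN that learns a 1-D Gaussian, showing the
# generator/discriminator loop described above. Requires PyTorch.
import torch
import torch.nn as nn

torch.manual_seed(0)

# Generator: maps random noise to fake "data" samples.
G = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 1))
# Discriminator: scores samples as real (1) or fake (0).
D = nn.Sequential(nn.Linear(1, 16), nn.ReLU(), nn.Linear(16, 1), nn.Sigmoid())

opt_g = torch.optim.Adam(G.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(D.parameters(), lr=1e-3)
loss_fn = nn.BCELoss()

for step in range(2000):
    real = torch.randn(64, 1) * 1.5 + 4.0   # "real" data drawn from N(4, 1.5)
    fake = G(torch.randn(64, 8))            # generator's counterfeit samples

    # 1) Train the discriminator to tell real from fake.
    opt_d.zero_grad()
    d_loss = loss_fn(D(real), torch.ones(64, 1)) + \
             loss_fn(D(fake.detach()), torch.zeros(64, 1))
    d_loss.backward()
    opt_d.step()

    # 2) Train the generator to fool the (just-updated) discriminator.
    opt_g.zero_grad()
    g_loss = loss_fn(D(fake), torch.ones(64, 1))
    g_loss.backward()
    opt_g.step()

# After training, generated samples should cluster near the real distribution.
print(G(torch.randn(5, 8)).detach().flatten())
```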

The Threat of Deepfakes to the Financial Industry

The financial industry thrives on trust. Over the years, institutions like banks, credit unions and fintechs have established strong relationships and credibility with their customers. But deepfakes pose a formidable risk to this trust.

Falling for deepfake-based scams can have serious consequences. Businesses can lose millions of dollars, and victims face both financial and reputational harm.

In 2020, fraudsters used an audio deepfake to steal $35 million from a Hong Kong bank. The audio was so convincing that the bank manager believed he was interacting with a known company director and transferred the funds.

While deepfake technology is still fairly new (the term first appeared in 2017), researchers already rank deepfakes among the most dangerous AI-enabled crimes of the future.

How Do Fraudsters Use Deepfakes?

Deepfakes are not limited to falsifying videos of celebrities. They have become powerful tools in the hands of fraudsters targeting the financial sector. Two methods tend to form the basis of most deepfake scams.

Voice Phishing

Using AI, fraudsters can create alarmingly accurate vocal replicas. Imagine receiving a phone call from a voice you trust, perhaps a family member or a friend, urgently asking for money. These tactics can deceive individuals into making unauthorized funds transfers to fraudulent accounts.

Businesses are also vulnerable. This technique fooled one executive into transferring $243,000, convinced he was following instructions from his boss.

Fake Video Scams

With social media platforms so ingrained in our daily lives, the danger of encountering deepfakes is escalating. Fraudsters can create realistic videos of people you know sharing false information or making deceitful requests. These scams manipulate unsuspecting users into taking harmful actions or disclosing sensitive information.

Deepfake-Driven Fraud Types

Account Takeover

Using deepfakes, cybercriminals can impersonate genuine users to seize control of their accounts. Once in control, they can change access settings and even lock the real user out entirely.

One dangerous form of this is “ghost fraud.” In these scams, fraudsters co-opt the personal data of a deceased individual to access their bank accounts, secure loans or credit, or even illegitimately claim pensions and other benefits. Gartner predicted that in 2023, deepfakes would play a role in 20% of successful account takeover attacks.

Application Fraud

Deepfakes can help bolster counterfeit bank account applications or loan requests. Fraudsters use deepfake technology to create seemingly authentic identities or manipulate real ones. In synthetic identity fraud, criminals merge real and fabricated identity details to create a fake identity. These tactics cost banks a staggering $6 billion.

Transaction Fraud

Phishing attacks have been around for decades, but with deepfakes, phishing is evolving to an unprecedented level. Historically, phishing was limited to emails or text messages. With today’s sophisticated AI deepfakes, criminals can escalate these attacks by replicating voices on audio calls and faces in videos. This surge in realism points to a serious uptick in both the frequency and success rates of attacks.

How to Protect Customers from Deepfakes

  1. Education – Awareness is a powerful tool. By informing customers about the existence and dangers of deepfakes, financial institutions (FIs) can empower them to be more cautious. Regular updates about the latest scam tactics can help customers identify suspicious activity.
  2. Multi-factor Authentication – MFA requires users to provide two or more verification factors to gain access to an account. These could include a one-time code sent to a registered device or something unique to the user, such as biometrics (see the TOTP sketch after this list). Even if a fraudster’s deepfake manages to bypass basic security, it would still need to defeat these extra layers.
  3. Biometric Verification – In the era of deepfakes, FIs should collaborate with identity verification providers for secure onboarding and digital authentication. Biometric verification matches a user’s features against trusted documents; authentication validates returning users against stored biometrics.
  4. Behavioral and Device Intelligence – FIs can enhance security with tools that analyze customer behavior and device characteristics. These tools detect anomalies in user interactions that can signal deepfake-driven intrusions (see the anomaly-detection sketch below).
  5. Leverage AI-Driven Tools – Cutting-edge, AI-driven technologies can detect and prevent deepfake-based fraud. Using machine learning models and other AI tools, FIs can strengthen their real-time risk detection capabilities, further ensuring customer safety.
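As one concrete example of the MFA layer in item 2, here is a minimal, illustrative sketch of time-based one-time password (TOTP, RFC 6238) verification using only Python’s standard library. The demo secret, helper names, and drift window are our own simplifications; production systems should rely on a vetted MFA provider.

```python
# Illustrative only: verifying a time-based one-time password (TOTP, RFC 6238)
# with the Python standard library. Secret handling here is simplified.
import base64, hashlib, hmac, struct, time

def totp(secret_b32: str, t=None, digits: int = 6, period: int = 30) -> str:
    """Compute the TOTP code for a base32-encoded shared secret."""
    key = base64.b32decode(secret_b32, casefold=True)
    counter = int((t if t is not None else time.time()) // period)
    msg = struct.pack(">Q", counter)                 # 8-byte big-endian counter
    digest = hmac.new(key, msg, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                       # dynamic truncation (RFC 4226)
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

def verify(secret_b32: str, submitted: str) -> bool:
    """Accept the current code or its immediate neighbors to allow clock drift."""
    now = time.time()
    return any(hmac.compare_digest(totp(secret_b32, now + skew), submitted)
               for skew in (-30, 0, 30))

# A server would store the secret at enrollment and check codes at login.
SECRET = "JBSWY3DPEHPK3PXP"                          # demo secret, not for production
print(verify(SECRET, totp(SECRET)))                  # True
```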
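And for the behavioral intelligence layer in item 4, here is a hedged sketch of one common approach: unsupervised anomaly detection with scikit-learn’s IsolationForest. The session features (login hour, typing speed, new-device flag) are hypothetical; real systems combine far richer behavioral and device signals.

```python
# Illustrative only: flagging anomalous sessions with an Isolation Forest.
# Feature names below are hypothetical stand-ins for behavioral signals.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(42)

# Hypothetical per-session features: [login_hour, typing_speed, new_device_flag]
normal_sessions = np.column_stack([
    rng.normal(14, 3, 500),      # logins cluster in the afternoon
    rng.normal(250, 40, 500),    # typing speed in characters per minute
    rng.binomial(1, 0.05, 500),  # rarely from a new device
])

model = IsolationForest(contamination=0.01, random_state=0).fit(normal_sessions)

# A suspicious session: 3 a.m. login, machine-like typing, unrecognized device.
suspect = np.array([[3.0, 900.0, 1.0]])
print(model.predict(suspect))    # -1 marks an anomaly; 1 would mean normal
```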

Mitigate Fraud Loss and Reputation Damage

To safeguard customers in this era of evolving threats, financial institutions must harness cutting-edge technology and deploy robust, scalable solutions to counteract deepfakes and other sophisticated attacks.

DataVisor provides a real-time, AI-powered fraud and risk management platform that helps banks, credit unions, and fintechs address a variety of fraud use cases, including application fraud, account takeover, and transaction fraud. By leveraging DataVisor’s advanced AI tools combined with device and behavioral intelligence, financial institutions can detect anomalies in their data in real time, enabling the immediate identification and prevention of fraudulent activities.

Explore how DataVisor’s platform does just that with best-in-class capabilities or book a time for a personalized demo with our team.