How to recognize and prevent deepfake scams

Deepfakes are a type of synthetic media created using AI and machine learning. In simple terms, they are videos, images, audio, or text that look and sound real, even though the events they depict never actually happened.


These altered clips spread across social media, messaging apps, and video-sharing platforms, blurring the line between reality and fiction.

The term “deepfake” was coined in 2017 when a Reddit user created a subreddit with that name. This subreddit was used to share AI-generated videos, often featuring celebrity face-swaps in explicit content.

At first, people used deepfakes for entertainment and fun, but over time, they have become a dangerous tool in the hands of criminals for fraud, identity theft, blackmail, and spreading misinformation.

What’s even more concerning is that you no longer need advanced skills to create one.

The technology behind deepfakes

Deepfakes mostly rely on a technology called Generative Adversarial Networks (GANs). Essentially, a GAN consists of two neural networks pitted against each other.

The first, the generator, creates the fake content, such as a video or image. The second, the discriminator, tries to figure out whether the content is real or fake. The two networks keep pushing each other to improve: the generator gets better at producing realistic-looking fakes, while the discriminator becomes more skilled at spotting imperfections.
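
To make that tug-of-war more concrete, here is a minimal GAN sketch in PyTorch. It trains on toy 2-D points rather than faces, and every detail in it (layer sizes, learning rates, the target distribution) is an illustrative assumption, not how any real deepfake tool is configured.

```python
# Minimal GAN sketch: a generator and a discriminator trained against each other
# on toy 2-D data. Real deepfake pipelines use large convolutional networks and
# huge face datasets, but the adversarial loop is the same.
import torch
import torch.nn as nn

latent_dim = 8

# Generator: turns random noise into a fake "sample" (here, a 2-D point).
generator = nn.Sequential(
    nn.Linear(latent_dim, 32), nn.ReLU(),
    nn.Linear(32, 2),
)

# Discriminator: outputs the probability that a sample is real.
discriminator = nn.Sequential(
    nn.Linear(2, 32), nn.ReLU(),
    nn.Linear(32, 1), nn.Sigmoid(),
)

loss_fn = nn.BCELoss()
g_opt = torch.optim.Adam(generator.parameters(), lr=1e-3)
d_opt = torch.optim.Adam(discriminator.parameters(), lr=1e-3)

for step in range(1000):
    # "Real" data: points drawn from a simple target distribution.
    real = torch.randn(64, 2) * 0.5 + torch.tensor([2.0, 2.0])
    fake = generator(torch.randn(64, latent_dim))

    # 1) Train the discriminator to tell real from fake.
    d_opt.zero_grad()
    d_loss = (loss_fn(discriminator(real), torch.ones(64, 1))
              + loss_fn(discriminator(fake.detach()), torch.zeros(64, 1)))
    d_loss.backward()
    d_opt.step()

    # 2) Train the generator to fool the discriminator.
    g_opt.zero_grad()
    g_loss = loss_fn(discriminator(fake), torch.ones(64, 1))
    g_loss.backward()
    g_opt.step()
```

Scaled up to convolutional networks and large face datasets, this same adversarial structure is what produces photorealistic deepfakes.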

Another technique, often used for face-swapping, is the autoencoder. Unlike GANs, autoencoders don’t pit a generator against a discriminator. Instead, an encoder compresses a person’s facial features into a small, manageable representation, and a decoder reconstructs them onto another person’s face. Even without the adversarial setup, this approach still produces convincing deepfakes for simpler tasks such as face swaps.
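
As an illustration, here is a minimal sketch, again in PyTorch, of the shared-encoder, two-decoder arrangement commonly used for face swapping. The random tensors merely stand in for aligned face crops, and the sizes and training settings are assumptions for illustration; real tools add face detection, alignment, convolutional layers, and blending.

```python
# Face-swap-style autoencoder sketch: one shared encoder, one decoder per identity.
import torch
import torch.nn as nn

img_dim = 64 * 64 * 3  # a flattened 64x64 RGB face crop

# One encoder compresses any face into a small latent code...
encoder = nn.Sequential(nn.Linear(img_dim, 512), nn.ReLU(), nn.Linear(512, 128))

# ...and each identity gets its own decoder that rebuilds a face from that code.
decoder_a = nn.Sequential(nn.Linear(128, 512), nn.ReLU(), nn.Linear(512, img_dim), nn.Sigmoid())
decoder_b = nn.Sequential(nn.Linear(128, 512), nn.ReLU(), nn.Linear(512, img_dim), nn.Sigmoid())

loss_fn = nn.MSELoss()
opt = torch.optim.Adam(
    list(encoder.parameters()) + list(decoder_a.parameters()) + list(decoder_b.parameters()),
    lr=1e-3,
)

# Placeholder batches standing in for aligned face crops of person A and person B.
faces_a = torch.rand(32, img_dim)
faces_b = torch.rand(32, img_dim)

for step in range(100):
    opt.zero_grad()
    # Each decoder learns to reconstruct its own person from the shared latent code.
    loss = (loss_fn(decoder_a(encoder(faces_a)), faces_a)
            + loss_fn(decoder_b(encoder(faces_b)), faces_b))
    loss.backward()
    opt.step()

# The swap: encode person A's face, then decode it with person B's decoder.
swapped = decoder_b(encoder(faces_a))
```

Because the encoder is shared, it learns identity-independent features such as pose and expression, which is what lets one person's expressions drive another person's face.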

With the availability of various AI tools, ranging from open-source software (DeepFaceLab, Faceswap) to mobile applications (Zao, Reface), making a deepfake now requires little more than a laptop or smartphone and the right software.

Cybercriminals weaponize deepfakes

According to an Entrust report, a deepfake attack occurred every five minutes in 2024. In one case, a deepfake video conference call, combined with social engineering tactics, led to a multinational company losing over $25 million.

These scams have hit crypto companies particularly hard, with an average loss of $440,000.

A recent scam that made headlines involved a French woman who was deceived by a scammer posing as actor Brad Pitt. The scammer used AI-generated images to convince her that they were in a romantic relationship. Over the course of 18 months, the woman transferred €830,000 to the fraudster, believing it was for a medical emergency involving Pitt.

Beyond financial harm, such schemes cause emotional distress, undermine trust in digital communication, and disrupt business operations.

Given the tense geopolitical situation worldwide, the technology has also been used to spread disinformation, particularly in the political arena. AI-generated videos and audio recordings can falsely depict candidates making controversial statements or engaging in damaging behavior, potentially swaying voters’ decisions and, ultimately, undermining democracy.

How to spot deepfakes

Although deepfakes are continuously improving, they still contain imperfections, and knowing what to look for makes them easier to spot.

Facial movements: Look for unnatural blinking patterns or eyes that seem unnaturally static. Subtle delays or mismatched expressions in reaction to events might also be a giveaway.

Lighting and shadows: Check if the lighting on the subject’s face matches the surrounding environment. Inconsistencies—like shifting light or irregular shadows and reflections—may suggest manipulation.

Audio-visual sync: Ensure that mouth movements align naturally with the audio. If the lip-sync appears off or the sound has slight distortions, it could be a sign of editing.

Visual artifacts: Examine the edges where the face meets hair or background. Unusual blurring, warped borders, pixelation, or ghosting effects can indicate that the video has been altered.
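
If you want to automate a small part of the artifact check above, the sketch below compares the sharpness of a detected face with the rest of the frame: a face that is noticeably blurrier or sharper than its surroundings can hint at a re-rendered, pasted-in face. It assumes opencv-python is installed and that clip_frame.jpg is a hypothetical still frame exported from the clip you want to inspect. Treat it as a rough heuristic, not a detector.

```python
# Rough heuristic: compare face sharpness to overall frame sharpness.
import cv2

frame = cv2.imread("clip_frame.jpg")  # hypothetical still frame from the suspect clip
if frame is None:
    raise SystemExit("Could not read the frame; export a still from the video first.")
gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)

# Frontal-face Haar cascade bundled with opencv-python.
cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml"
)
faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)

def sharpness(img):
    """Variance of the Laplacian: higher means sharper, lower means blurrier."""
    return cv2.Laplacian(img, cv2.CV_64F).var()

frame_sharpness = sharpness(gray)
for (x, y, w, h) in faces:
    face_sharpness = sharpness(gray[y:y + h, x:x + w])
    ratio = face_sharpness / max(frame_sharpness, 1e-6)
    print(f"Face at ({x}, {y}): face/frame sharpness ratio = {ratio:.2f}")
    # A ratio far from 1.0 (say, below 0.5 or above 2.0) is worth a closer manual look.
```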

How to mitigate deepfake risks

There are some preventative measures we can implement to protect ourselves:

Use deepfake detection tools and software: Take advantage of specialized tools and AI-driven software designed to detect deepfakes. These tools can analyze digital content for anomalies and help verify its authenticity.

Stay updated on deepfake trends and technologies: Keep yourself informed about the latest developments in deepfake technology. Being aware of new techniques and common signs of manipulation can help you spot deepfakes.

Implement MFA to verify identities: This can prevent attackers from gaining access to sensitive accounts, even if they manage to create a convincing deepfake of someone (a minimal example follows this list).

Establish code words or verification processes for sensitive communications: This helps ensure that the person you’re communicating with is actually who they claim to be, which is particularly useful when dealing with transactions or sensitive personal matters.

Limit the sharing of personal media online: The more personal media you share online, the more material you provide to threat actors for creating deepfakes. Be careful about what images, videos, and other media you share on social platforms. Adjust privacy settings so that only trusted individuals or groups can view them.
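
As one concrete form of the MFA advice above, here is a minimal sketch of time-based one-time-password (TOTP) verification using the pyotp library. The account name and issuer are placeholders, and a real deployment would provision the secret once (for example, via a QR code) and store it securely server-side.

```python
# Minimal TOTP second-factor check with pyotp (pip install pyotp).
import pyotp

# Enrollment (done once): generate and store a per-user secret.
secret = pyotp.random_base32()
totp = pyotp.TOTP(secret)

# This URI can be rendered as a QR code for an authenticator app.
# "user@example.com" and "ExampleCorp" are placeholders.
print(totp.provisioning_uri(name="user@example.com", issuer_name="ExampleCorp"))

# Login time: the user submits the 6-digit code from their authenticator app.
code = input("Enter the 6-digit code: ")
if totp.verify(code):
    print("Second factor accepted.")
else:
    print("Invalid code: deny access, no matter how convincing the voice or video seems.")
```

The point is that a deepfaked voice or face alone cannot produce the current code, so the second factor holds even when the first impression is fooled.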

No one knows what the future holds, but what was impossible, or available only to a select few, 20 years ago is now accessible to everyone, including people with bad intentions.

As technology advances, it will become increasingly difficult to distinguish what is real from what is fake, and there is no magic wand that can protect us. All we can do is exercise caution, take every measure within our power to safeguard ourselves, and, of course, not believe everything we see on the internet or social media.
