Spotting AI-generated scams: Red flags to watch for

In this Help Net Security interview, Andrius Popovas, Chief Risk Officer at Mano Bank, discusses the most prevalent AI-driven fraud schemes, such as phishing attacks and deepfakes. He explains how AI manipulates videos and audio to deceive victims and highlights key red flags to watch for.

Popovas also outlines strategies for professionals to stay ahead of these scams and the role of governments in combating AI fraud.


What types of AI-driven fraud schemes are most prevalent today?

In our experience, the most common are phishing attacks, in which AI generates highly realistic phishing emails or other messages (SMS, Viber, WhatsApp, etc.) that mimic real companies and trick users into revealing sensitive information such as passwords or credit card numbers.

There is also an increasing number of incidents involving deepfakes. These are AI-generated videos or audio recordings that impersonate real people and are used to gain access to sensitive or personal information. I must also mention social engineering and account takeovers.

AI tools can analyze social media profiles to craft personalized scams, and they can help fraudsters guess or crack account passwords, allowing them to take control of user accounts. This extends to loan and credit application fraud: by automating data input and using stolen or synthetic identities, fraudsters can submit fraudulent applications for loans or credit.

Can you elaborate on how AI manipulates videos and audio to deceive victims and what specific red flags people should look for?

AI manipulation of videos and audio, commonly known as deepfakes, utilizes machine learning algorithms to create realistic but altered content. AI algorithms can swap faces in videos, making it appear as though someone else is saying or doing something they never actually did, often for malicious purposes. Deepfake audio technology can replicate a person’s voice by analyzing existing recordings. This allows the creation of audio that sounds like the specific individual, which can be used in scams or to manipulate conversations.

Therefore, if you receive a video or audio call, you need to pay attention to these red flags:

Strange facial movements: Look for unnatural expressions, mismatched lip movements, or awkward eye movement.

Inconsistent lighting: Natural videos typically have consistent lighting across a scene. If the lighting looks off, with harsh contrasts or mismatched shadows, it could be a sign of manipulation.

Blurriness: Edges where the face is swapped may appear blurred, distorted, or pixelated, especially if the transition between the fake and real parts is not smooth.

Audio anomalies: If the voice sounds robotic, lacks emotion, or doesn't match the speaking style of the person being imitated, it could be a fake.

Being aware of these signs can help individuals avoid falling victim to deepfake scams or misinformation.

Always approach unfamiliar videos and audio with a critical eye, especially if they involve sensitive or controversial topics.

How can professionals in the financial and cybersecurity sectors stay ahead of AI-based scams?

Here are several effective approaches:

1. Ensure all staff members are trained to recognize phishing attempts, deepfakes, and other AI-driven scams.
2. Implement AI-driven security solutions that can detect anomalies and identify potentially fraudulent activity in real time.
3. Use machine learning to analyze transaction patterns and flag suspicious behavior indicative of fraud (a minimal sketch follows this list).
4. Work with industry peers and cybersecurity organizations to share information about new threats, tactics, and effective countermeasures.
5. Conduct routine penetration testing and security audits to identify weaknesses in systems and processes.
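As a concrete illustration of points 2 and 3, here is a minimal sketch of unsupervised anomaly detection on transaction data using scikit-learn's IsolationForest. The features, values, and contamination rate are illustrative assumptions, not a description of any bank's production system.

```python
import numpy as np
from sklearn.ensemble import IsolationForest

# Hypothetical transaction features: amount, hour of day, and
# days since the account's previous transaction.
rng = np.random.default_rng(42)
normal = np.column_stack([
    rng.normal(80, 25, 1000),   # typical amounts around 80
    rng.normal(14, 3, 1000),    # activity clustered in the afternoon
    rng.exponential(2, 1000),   # short gaps between transactions
])

# Fit an unsupervised anomaly detector on historical behavior.
model = IsolationForest(contamination=0.01, random_state=42)
model.fit(normal)

# Score new transactions; a prediction of -1 marks an outlier worth flagging.
new_transactions = np.array([
    [75.0, 13.0, 1.5],     # looks like normal behavior
    [9500.0, 3.0, 45.0],   # large amount, odd hour, dormant account
])
for tx, label in zip(new_transactions, model.predict(new_transactions)):
    status = "FLAG for review" if label == -1 else "ok"
    print(f"amount={tx[0]:.2f} hour={tx[1]:.0f} gap={tx[2]:.1f}d -> {status}")
```

In practice, such a model would be trained on real historical data and combined with rule-based controls and human review before any account action is taken.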

What role do governments and international organizations play in combating AI fraud? Are any international standards or frameworks currently in place to address this growing issue?

Governments and international organizations play a crucial role in combating AI fraud through several key functions: regulation, policy formulation, enforcement, and international cooperation. We currently use these frameworks and standards:

OECD AI Principles: The OECD has set out recommendations for AI that emphasize the importance of transparency and accountability to mitigate risks, including fraud.

European Union AI Act: The EU is working on regulatory frameworks, including the AI Act, which seeks to regulate the development and deployment of AI to ensure safety and mitigate risks, among them those that could lead to fraud.

NIST AI Risk Management Framework: The National Institute of Standards and Technology (NIST) in the U.S. is developing frameworks that address risks associated with AI, including fraud, and outline clear standards and practices.

Are there any emerging technologies or innovations that could help mitigate the risks posed by AI fraud?

Blockchain technology is one possibility: the decentralized nature of blockchain can help validate the authenticity of transactions and data, making it harder for fraudulent activity to go unnoticed. Smart contracts can automate processes and enforce compliance, reducing the potential for fraud.
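To illustrate the integrity property described above, here is a toy hash-chained ledger in Python. This is a simplified sketch of the underlying idea, not a real blockchain: each record commits to the hash of its predecessor, so altering any earlier entry invalidates every hash that follows and the tampering is detected.

```python
import hashlib
import json

def record_hash(record: dict) -> str:
    """Deterministic SHA-256 hash of a ledger record."""
    return hashlib.sha256(json.dumps(record, sort_keys=True).encode()).hexdigest()

def append(ledger: list, data: dict) -> None:
    """Append a record that commits to the previous record's hash."""
    prev = ledger[-1]["hash"] if ledger else "0" * 64
    record = {"data": data, "prev_hash": prev}
    record["hash"] = record_hash({"data": data, "prev_hash": prev})
    ledger.append(record)

def verify(ledger: list) -> bool:
    """Recompute every hash; any tampering breaks the chain."""
    prev = "0" * 64
    for record in ledger:
        expected = record_hash({"data": record["data"], "prev_hash": prev})
        if record["prev_hash"] != prev or record["hash"] != expected:
            return False
        prev = record["hash"]
    return True

ledger = []
append(ledger, {"from": "A", "to": "B", "amount": 100})
append(ledger, {"from": "B", "to": "C", "amount": 40})
print(verify(ledger))               # True: chain is intact

ledger[0]["data"]["amount"] = 9000  # an attacker rewrites history
print(verify(ledger))               # False: tampering is detected
```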
