Mitek Digital Fraud Defender combats AI-generated fraud
Mitek announced Digital Fraud Defender (DFD), an advanced, multi-layered solution to safeguard digital identity verification processes against sophisticated AI-enabled fraud tactics.
Designed for financial institutions, fintechs, online gaming providers, and enterprises requiring remote identity verification, the new suite addresses the urgent and growing challenges posed by generative AI’s ability to create highly realistic fake images, documents, and videos.
According to Deloitte’s Center for Financial Services, fraud losses from generative AI could surge from $12.3 billion in 2023 to a staggering $40 billion by 2027. With incidents of deepfake fraud alone projected to increase by 700% by 2031, DFD is the first holistic suite of defenses designed to protect organizations and their customers from these risks.
“Too many organizations, and identity vendors, are taking a one-off approach to addressing fraud vectors. They try to react to attacks one at a time,” said Chris Briggs, Mitek’s Chief Product Officer. “Digital Fraud Defender was designed to fight AI with AI and enable our customers to outpace modern fraud with a holistic suite of tools.”
Future-proofing identity verification
As digital interactions grow, DFD equips organizations with the tools to counter advanced fraud techniques while maintaining robust identity verification processes. By leveraging DFD capabilities, clients mitigate risks, ensure compliance, and maintain customer trust in a rapidly changing digital fraud landscape.
The Digital Fraud Defender solution
DFD, now available within the Mitek Verified Identity Platform, combines Mitek’s proprietary liveness technology with a multi-layered approach to safeguarding identity verification (IDV) processes against AI-based digital fraud. This solution goes beyond surface-level detection by analyzing both content and channels for signs of manipulation, providing protection across several key fraud vectors:
- Injection attack detection – Guards against the insertion of manipulated or fake content during verification by analyzing both the content itself and the transmission channel for anomalies.
- Template attack detection – Identifies the use of fraudulent document templates, even when paired with genuine personal information, by spotting patterns and frequency of recurrence, as well as comparing against known fraud profiles.
- Deepfake attack detection – Utilizes passive liveness detection to examine video and audio for artifacts unique to AI-generated content, identifying even subtle signs of manipulation.
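To illustrate the template-attack idea in the abstract, the sketch below flags document templates that recur suspiciously often across submissions, even when the personal details differ. This is a toy example under assumed inputs, not Mitek’s implementation; `layout_features` stands in for whatever layout/feature extraction a real system would perform.

```python
import hashlib
from collections import Counter

class TemplateRecurrenceDetector:
    """Illustrative only: flag document templates seen too often.

    Fingerprints the template layout (not the personal data on it),
    so the same fraudulent template reused with different identities
    produces the same fingerprint.
    """

    def __init__(self, threshold: int = 3):
        self.threshold = threshold   # recurrences before flagging
        self.seen = Counter()        # fingerprint -> submission count

    def fingerprint(self, layout_features: str) -> str:
        # Hash only layout features so identical templates collide
        # regardless of the name, photo, or ID number filled in.
        return hashlib.sha256(layout_features.encode()).hexdigest()

    def check(self, layout_features: str) -> bool:
        fp = self.fingerprint(layout_features)
        self.seen[fp] += 1
        return self.seen[fp] >= self.threshold  # True = suspicious

detector = TemplateRecurrenceDetector(threshold=3)
# Same hypothetical template layout submitted three times,
# as if paired with different genuine personal information:
results = [detector.check("layout:passport-v2|font:arial|bg:pattern17")
           for _ in range(3)]
print(results)  # [False, False, True] -- third recurrence is flagged
```

A production system would of course use fuzzy or perceptual fingerprints rather than exact hashes, and compare against known fraud profiles as the bullet above describes.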