
Banks Unveil Strategy to Combat AI Identity Theft

Summary

– Generative AI has drastically reduced the cost of deepfakes, leading to their routine use by criminals and state actors against financial institutions.
– Deepfake incidents in fintech surged 700% in 2023, with AI-enabled U.S. fraud losses projected to hit $40 billion by 2027.
– Attack methods include AI-generated phishing, synthetic identities, and real-time deepfakes, with AI automating phishing to cut costs by over 95%.
– Key policy recommendations include promoting phishing-resistant authentication like passkeys and expanding federal identity verification systems like eCBSV.
– The report calls for updated regulatory guidance, international standards coordination, and public education campaigns to address the infrastructure gaps enabling these threats.

The financial sector is facing an unprecedented wave of AI-driven identity fraud, compelling industry leaders to demand urgent policy action. A new report from a coalition of banking and identity security groups details the explosive growth of deepfake attacks and outlines a concrete strategy for policymakers. The analysis reveals that deepfake incidents targeting fintech firms surged by 700% in 2023, a trend that shows no sign of slowing. Deloitte projects that AI-enabled fraud losses in the U.S. could skyrocket to $40 billion by 2027, a staggering increase from $12.3 billion just a few years prior. This crisis is systemic, with identity-related issues accounting for 42% of all Suspicious Activity Reports filed by banks in 2021.

Criminals are leveraging generative AI across ten distinct attack vectors. These include real-time deepfake fraud used to bypass video verification, AI-generated phishing campaigns that are both cheaper and more effective, and the creation of synthetic identities. A primary driver of this surge is the automation of phishing. Large language models can now orchestrate entire phishing operations, slashing costs by over 95% while maintaining high success rates. Research indicates that 60% of people have already fallen victim to these AI-automated schemes. This technological shift exploits long-standing weaknesses in legacy authentication methods, such as SMS one-time passcodes and passwords, making large-scale attacks not just possible but profitable.

In response, the coalition has proposed a four-part policy initiative focused on achievable goals within a two-to-three-year timeframe. The first pillar centers on identity proofing and verification. Key recommendations include forming a Treasury-led task force to align digital and physical credentials and promoting mobile driver’s licenses that use public key cryptography. Because a deepfake cannot replicate possession of a private cryptographic key, these credentials offer strong resistance to AI spoofing. The plan also calls for expanding the Social Security Administration’s eCBSV system beyond credit checks to include account opening and background verifications, giving institutions a reliable government source for identity validation.

The second initiative pushes for widespread adoption of phishing-resistant authentication. Regulators are urged to guide financial institutions toward FIDO security keys and passkeys for both customer and internal systems. Jeremy Grant, coordinator of the Better Identity Coalition, noted that public awareness of passkeys has grown rapidly since their late 2023 rollout, but a significant knowledge gap remains. “Some people believe going passwordless makes them less secure, a view shaped by decades of guidance telling people to create strong, unique passwords,” Grant explained. “That has not been an effective cybersecurity tool for a long time now.” To combat this misconception, the report recommends a national public awareness campaign.

International coordination forms the third initiative, urging U.S. agencies like NIST and Treasury to engage with global partners on digital wallet interoperability and standards. This is seen as a strategic necessity, as adversarial nations are actively shaping these international standards while U.S. participation is often limited by resources. The final initiative focuses on public education, including specific campaigns to inform consumers about deepfake threats and the benefits of phishing-resistant tools like passkeys.

A significant hurdle is the current regulatory gap. Financial institutions must navigate older rules like the Bank Secrecy Act alongside FFIEC guidance, neither of which fully addresses modern credential technologies. Updated regulatory guidance is essential to give banks confidence in deploying new defenses while remaining compliant. Grant emphasized that the threat transcends finance. “Deepfakes are not a sector-specific problem but a national problem,” he stated. “It’s the same organized criminals exploiting the same core deficiencies to steal from banks, fintechs, health, retailers, and government.”

The report highlights four recommendations with the broadest potential impact: a state infrastructure grant program tied to NIST standards, expanding eCBSV access, accelerating NIST’s guidance on liveness detection technology, and creating a multi-agency task force to monitor AI-driven identity threats. Legislative efforts, such as the proposed Stop Identity Fraud and Identity Theft Act of 2026, which would fund security grants, are also seen as promising steps to build a more resilient identity ecosystem across all sectors.

(Source: Help Net Security)

Topics

deepfake threats, AI-enabled fraud, identity verification, phishing-resistant authentication, policy recommendations, regulatory guidance, public awareness campaigns, international coordination, synthetic identity creation, legacy authentication vulnerabilities