Two-Thirds of Firms Hit by Deepfake Attacks

▼ Summary
– 62% of organizations experienced a deepfake attack in the past year, primarily through social engineering or by exploiting automated verification systems.
– A senior Gartner director warns that deepfake threats are growing due to continuous technological improvements.
– Organizations are urged to integrate emerging deepfake detection tools into platforms like Microsoft Teams or Zoom, though their effectiveness is still being evaluated.
– Effective defenses include employee awareness training with simulated deepfakes and implementing application-level authorization, like phishing-resistant MFA, for financial transactions.
– 32% of organizations have also faced attacks on AI applications, including prompt injection attacks that manipulate AI into generating malicious outputs.
A startling new survey reveals that a majority of organizations, 62%, have fallen victim to a deepfake attack within the last year. These sophisticated assaults typically involve social engineering, where attackers impersonate company leaders through fabricated video or audio during calls with staff. Another common method is the manipulation of automated verification systems, such as those relying on face or voice recognition.
According to Akif Khan, a senior director at Gartner Research, the threat landscape is rapidly evolving. He warns that as the underlying technology becomes more accessible and convincing, the frequency and severity of these attacks are poised to increase significantly. The most widespread danger currently stems from the fusion of deepfakes with classic social engineering tactics. A typical scenario might involve a deepfake of a CEO urgently instructing an employee to wire funds to a fraudulent account.
Khan emphasizes that this combination is particularly insidious. “Social engineering has always been a reliable tool for attackers. Adding a highly realistic deepfake to the mix puts immense pressure on employees, who become the first line of defense,” he explained. Relying solely on automated systems is no longer sufficient; human vigilance is critical.
To combat this rising threat, Khan points to emerging technical solutions. He suggests that organizations evaluate vendors who are integrating deepfake detection capabilities directly into everyday communication platforms like Microsoft Teams and Zoom. However, he offers a note of caution, stating that these integrations are still novel. “There aren’t many large-scale deployments yet, so their real-world effectiveness in a live environment remains to be fully proven,” Khan noted.
For more immediate protection, some companies are finding success with specialized awareness training. These programs often involve creating their own deepfakes of executives and using them in simulated attack exercises to educate employees on what to look for. Another crucial defensive measure is a thorough review of internal business processes, especially concerning financial transactions.
Khan advises implementing authorization checks at the application level. “A process could be designed so that even if the CFO calls to request a payment, the transaction must still be formally approved within the finance application itself,” he said. This secondary step should ideally require the executive to log in using phishing-resistant multi-factor authentication (MFA) to authorize the transfer, creating a vital barrier.
The same Gartner report, presented at the Gartner Security & Risk Management Summit 2025, highlighted another concerning trend: attacks targeting artificial intelligence applications. It found that 32% of organizations have experienced an attack on their AI systems over the past twelve months. These incidents often involve prompt injection attacks, a technique where malicious inputs are crafted to manipulate large language models (LLMs) into producing biased, incorrect, or harmful outputs.
(Source: Info Security)
