AI Social Engineering: Top Cyber Threat by 2026, ISACA Finds

▼ Summary
– AI-driven social engineering is projected to be the top cyber threat in 2026, according to an ISACA report published on October 20, 2025.
– 63% of surveyed IT and cybersecurity professionals view AI-driven social engineering as a major challenge, surpassing ransomware and supply chain attacks.
– Only 13% of organizations feel very prepared to manage generative AI risks, with many still developing governance and training.
– AI and machine learning are top technology priorities for 2026, with 62% of respondents planning further investment.
– The EU is leading in AI compliance, with the AI Act expected to provide clarity for companies operating there.

A recent analysis from ISACA identifies AI-driven social engineering as the foremost cybersecurity threat anticipated for 2026. This emerging threat leverages artificial intelligence to craft highly convincing, personalized deceptive communications that are exceptionally difficult for individuals and security systems to detect. The survey, which gathered insights from 3,000 technology and cybersecurity professionals, found that a striking 63 percent view this AI-powered tactic as a primary concern. This is the first time AI-driven social engineering has claimed the top position, moving ahead of persistent threats such as ransomware and extortion, cited by 54 percent of respondents, and supply chain attacks, cited by 35 percent.
The research underscores a dual perspective on artificial intelligence within the professional community. While many acknowledge the significant opportunities AI presents, there is also widespread recognition of the novel risks it introduces, for which most feel underprepared. Only 13 percent of organizations described themselves as “very prepared” to handle risks associated with generative AI. Half of the respondents said they feel “somewhat prepared,” while a quarter admitted they are “not very prepared.” The report points out that the majority of professionals are still in the process of establishing governance frameworks, developing policies, and organizing training programs, leaving considerable security gaps in the meantime.
Looking forward, a strong majority of those surveyed see further investment in artificial intelligence as essential. Specifically, 62 percent identified AI and machine learning as leading technological priorities for the year 2026. This reflects a strategic push to harness AI’s benefits while simultaneously building defenses against its malicious applications.
On the regulatory front, many professionals believe that clearer rules, particularly those focused on AI safety and security, could help close the current preparedness gap. Karen Heslop, ISACA’s Vice President of Content Development, commented during a recent press event that the European Union is at the forefront of technology compliance, including standards for cybersecurity and AI security. She expressed general support for the EU’s AI Act, suggesting it may provide much-needed compliance clarity for businesses operating within the EU, even as she described the broader U.S. regulatory environment as a potential “compliance nightmare.”
(Source: InfoSecurity Magazine)