Cybersecurity Teams Unprepared for AI Attack Speed

Summary
– Over half (56%) of IT and cybersecurity professionals are unsure how fast they could shut down AI systems compromised in a cyber-attack.
– There is significant confusion over accountability, with 20% of respondents not knowing who is responsible for managing enterprise AI applications.
– Fewer than half (43%) of security professionals have high confidence in their organization’s ability to investigate and explain a serious AI incident.
– A majority of organizations lack robust human oversight, with only 36% requiring pre-approval for most AI actions.
– An expert warns that organizations must establish proper governance and guardrails before adopting AI to use it responsibly and manage crises.
A significant majority of cybersecurity professionals are uncertain about their ability to respond to threats targeting artificial intelligence systems. New research reveals that 56% of IT and security experts do not know how quickly they could shut down an AI application compromised in a cyber-attack. The data, from a global ISACA survey of more than 3,400 professionals, highlights a critical preparedness gap as organizations rapidly integrate AI. Only 32% of respondents believe they could contain a potentially compromised system within an hour, while just 7% estimate a response time of more than an hour.
This uncertainty is compounded by widespread confusion over AI governance and ownership. The survey found that 20% of professionals are unsure who within their organization is accountable for managing enterprise AI applications. Responsibility is fragmented, with 28% pointing to board-level executives, 18% to the CIO or CTO, and 13% to the CISO. This lack of clear ownership creates substantial security and compliance risks, as accountability for incidents becomes blurred.
Regardless of where formal responsibility lies, confidence in handling a serious AI security incident is low. Fewer than half of the security professionals surveyed (43%) express high confidence in their organization’s ability to investigate a major AI breach and explain it to leadership or regulators. More than a quarter (27%) admit to having little or no confidence in this capability.

Experts link this vulnerability to a lack of human oversight in AI operations. Only 36% of organizations require human approval for most AI actions before they are executed. Another 26% review AI activity only after the fact, while 11% conduct reviews solely for flagged incidents. Alarmingly, 20% of respondents do not know what role humans play in overseeing their organization’s AI decisions.
This environment suggests many companies would struggle even to identify an AI-related security issue. Industry leaders stress that the drive to adopt AI must be matched with robust governance frameworks. Implementing proper guardrails, involving the right people, policies, and response plans, is not just a matter of using AI responsibly; it is essential for mitigating major operational disruption when a crisis inevitably occurs.
(Source: Infosecurity Magazine)

