Artificial IntelligenceCybersecurityNewswireTechnology

AI to Drive 50% of Incident Response by 2028

▼ Summary

– Gartner warns that custom-built AI applications pose a major security risk, with half of enterprise incident response efforts by 2028 likely devoted to managing their security fallout.
– Security teams are advised to “shift left” and get involved early in AI projects to build in adequate controls from the start.
– AI-powered security tools are predicted to be used by half of organizations within two years to protect both third-party and custom AI apps from threats like prompt injection.
– By next year, nearly a third of organizations will demand comprehensive sovereignty of cloud security controls due to geopolitical risks and regulations, though this can slow innovation.
– Confidential computing is being promoted as a technology to achieve data sovereignty by creating secure processor-level enclaves to protect data in use.

The rapid adoption of custom-built artificial intelligence applications is creating significant new security challenges that will dominate incident response efforts within the next few years. According to a leading analyst firm, by 2028, at least 50% of enterprise incident response will be dedicated to managing security issues stemming from these bespoke AI systems. The core problem lies in the speed of deployment, with many complex tools being rolled out before thorough security testing is complete. This leaves organizations vulnerable as their security teams often lack established processes for handling AI-specific incidents, leading to prolonged and resource-intensive resolutions.

Security leaders are advised to “shift left” and integrate security controls from the earliest stages of AI project development. Proactive involvement is critical to building adequate safeguards into these dynamic and difficult-to-secure systems from the start. Failing to do so will result in mounting operational headaches as teams scramble to contain fallout from prompt injection attacks, data misuse, and other novel threats.

Interestingly, the same technology creating these problems is also seen as part of the solution. The analyst firm predicts that within two years, half of all organizations will deploy AI-powered security platforms specifically designed to protect their use of both third-party AI services and custom-built applications. These platforms help enforce acceptable use policies, monitor activity, and apply consistent security guardrails. Furthermore, the rise of AI-driven identity platforms is anticipated, aimed at improving the detection and remediation of risks associated with the explosion of machine identities, which now vastly outnumber human users and present a significantly higher risk profile, especially when over-permissioned.

Beyond AI, another major trend is reshaping the cloud security landscape: digital sovereignty. Geopolitical tensions and evolving local regulations are pushing nearly a third of organizations to demand comprehensive sovereignty over their cloud security controls by next year. Chief Information Security Officers are expected to play a pivotal role in defining these requirements. However, this shift presents its own dilemmas, as overly rigid sovereignty mandates can inadvertently stifle innovation. Research indicates that data sovereignty and privacy concerns are already the primary factors slowing AI projects in public cloud environments for a majority of companies.

A significant gap exists between the demand for sovereign guarantees and the current ability of providers to deliver them, highlighting a pressing need for stronger controls. To bridge this divide, technologies like confidential computing are being promoted. This approach creates secure, isolated enclaves at the processor level to protect data while it is being used, offering a path to achieve sovereignty requirements without compromising on security or operational flexibility.

(Source: NewsAPI Cybersecurity & Enterprise)

Topics

ai security 95% data sovereignty 85% custom ai 85% cloud security 80% incident response 80% machine identities 75% ai threats 75% security tools 75% shift left 70% geopolitical risk 70%