40% of Firms Face Shadow AI Security Threats, Gartner Warns

▼ Summary
– By 2030, over 40% of global organizations will face security and compliance incidents from unauthorized AI tools, according to Gartner.
– A survey found 69% of cybersecurity leaders suspect or have evidence of employees using public generative AI at work, risking IP loss and data exposure.
– Gartner predicts 50% of enterprises will experience delayed AI upgrades and rising maintenance costs by 2030 due to unmanaged technical debt from GenAI usage.
– To mitigate risks, experts recommend establishing clear AI usage policies, conducting audits, and tracking technical debt metrics in IT dashboards.
– Organizations should prioritize open standards and modular architectures to avoid vendor lock-in and preserve essential human skills when implementing AI solutions.
A significant number of companies worldwide are projected to encounter serious security and compliance incidents by 2030, directly linked to employees using unauthorized artificial intelligence applications. Gartner’s latest forecast indicates that more than 40% of global enterprises will be affected, underscoring a widespread challenge many IT leaders already recognize. Recent surveys of cybersecurity professionals show that nearly 70% either have proof or strong suspicions that their teams are accessing public generative AI platforms during work hours.
These unsanctioned tools introduce considerable dangers, including the potential for intellectual property theft and accidental data leaks. High-profile cases, such as Samsung’s internal ban on generative AI after employees shared proprietary source code and confidential meeting notes with ChatGPT, highlight how real these threats are. To counter such risks, organizations are advised to implement comprehensive, company-wide AI usage policies, perform periodic audits to detect unsanctioned AI activity, and integrate generative AI risk assessments into their standard software evaluation procedures.
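As a concrete illustration of what such an audit might involve, the sketch below scans a web proxy log for traffic to well-known public generative AI endpoints. This is a minimal example, not a prescribed method: the domain list, the CSV log format with `user` and `host` columns, and the file name are all assumptions for illustration and would need to match an organization's actual proxy export and AI usage policy.

```python
import csv
from collections import Counter

# Hypothetical list of public GenAI domains to flag; extend this to
# match whatever your organization's AI usage policy actually covers.
UNSANCTIONED_AI_DOMAINS = {
    "chat.openai.com",
    "chatgpt.com",
    "gemini.google.com",
    "claude.ai",
    "copilot.microsoft.com",
}

def audit_proxy_log(path: str) -> Counter:
    """Count requests per user to unsanctioned GenAI domains.

    Assumes a CSV proxy log with 'user' and 'host' columns; adapt the
    parsing to your proxy's real export format.
    """
    hits = Counter()
    with open(path, newline="") as f:
        for row in csv.DictReader(f):
            if row.get("host", "").lower() in UNSANCTIONED_AI_DOMAINS:
                hits[row.get("user", "unknown")] += 1
    return hits

if __name__ == "__main__":
    # Report the heaviest users of unapproved AI services first.
    for user, count in audit_proxy_log("proxy_log.csv").most_common():
        print(f"{user}: {count} requests to unsanctioned AI services")
```

A periodic report like this gives security teams a starting point for conversations with heavy users of unapproved tools, rather than serving as a punitive measure on its own.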
Gartner’s analysis is supported by other industry studies. Last year, research from Strategy Insights found that more than a third of organizations across the US, UK, Germany, and Nordic and Benelux countries struggled to monitor and control unauthorized AI use. In the same period, RiverSafe reported that one in five UK businesses experienced exposure of sensitive corporate information due to staff using generative AI tools. More recently, 1Password shared data showing 27% of employees admitted to using AI applications not approved by their employers.
![Image: A business professional looking at a laptop screen with warning symbols related to data security.]
Even officially approved AI deployments carry their own complications. Gartner anticipates that half of all enterprises will encounter postponed AI system upgrades and increased maintenance expenses by 2030, stemming from unaddressed technical debt linked to generative AI. Delays in applying updates can introduce security vulnerabilities if not carefully managed. While businesses are often drawn to the rapid development capabilities of generative AI, the long-term expenses tied to maintaining, correcting, or replacing AI-generated outputs, such as software code, marketing content, or design elements, can significantly reduce the expected return on investment.
Establishing clear internal standards for reviewing and documenting AI-created materials, along with tracking technical debt via IT management dashboards, allows companies to mitigate disruptions and control costs.

Over-reliance on generative AI also raises concerns about vendor lock-in and the gradual decline of in-house expertise. To preserve critical organizational knowledge and skills, companies should identify areas where human oversight and specialized craftsmanship remain irreplaceable, ensuring AI tools are used to support, not supplant, these capabilities. Adopting open standards, open APIs, and modular system designs when building AI infrastructure can further help organizations avoid becoming overly dependent on any single technology provider.
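One way to realize that modular-design advice in code is to put a thin, provider-agnostic interface between application logic and any specific AI vendor. The sketch below is a minimal illustration, assuming nothing beyond the article's recommendation: the class and method names are invented for this example, and the adapters are stubs where a real deployment would call actual vendor SDKs.

```python
from abc import ABC, abstractmethod

class TextGenerator(ABC):
    """Provider-agnostic interface: application code depends only on
    this abstraction, never on a specific vendor's SDK."""

    @abstractmethod
    def generate(self, prompt: str) -> str: ...

class OpenAIAdapter(TextGenerator):
    def generate(self, prompt: str) -> str:
        # A real adapter would call the vendor's SDK here; stubbed
        # to keep the sketch self-contained and runnable.
        return f"[openai] completion for: {prompt}"

class LocalModelAdapter(TextGenerator):
    def generate(self, prompt: str) -> str:
        # Same interface backed by an in-house or open-weight model.
        return f"[local] completion for: {prompt}"

def summarize_meeting(notes: str, llm: TextGenerator) -> str:
    # Swapping providers is a one-line change at the call site,
    # not a rewrite of the application logic.
    return llm.generate(f"Summarize these notes: {notes}")

if __name__ == "__main__":
    print(summarize_meeting("Q3 roadmap discussion", OpenAIAdapter()))
    print(summarize_meeting("Q3 roadmap discussion", LocalModelAdapter()))
```

Keeping vendor-specific code confined to such adapters means a provider change, price increase, or service shutdown affects one module rather than every team that uses AI features.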
(Source: InfoSecurity Magazine)
