Workers Rush to Build AI Apps, Ignoring Security Risks

Summary
– Enterprise GenAI platform usage rose 50%, driven by employee demand for custom AI tools, with over 50% of adoption being unsanctioned “shadow AI.”
– GenAI platforms, the fastest-growing shadow AI category, pose security risks by connecting enterprise data to AI apps, increasing DLP and monitoring needs.
– On-premises GenAI deployment is rising, with 34% of organizations using LLM interfaces like Ollama, but security responsibility falls solely on the organization.
– SaaS GenAI app usage surged to 1,550 distinct apps, with enterprises averaging 15 apps and uploading 8.2 GB of data monthly, while ChatGPT adoption declined.
– Employee experimentation with AI tools is growing, with 39% of organizations using GitHub Copilot and 66% making API calls to OpenAI, highlighting agentic AI adoption.
Businesses are racing to adopt AI tools, but security concerns lag behind as employees bypass official channels to build custom applications. Recent data reveals a 50% surge in generative AI platform usage across enterprises, driven by teams eager to develop tailored AI solutions. However, over half of these implementations operate as shadow AI, unsanctioned projects that expose organizations to significant data risks.
The appeal of GenAI platforms lies in their ability to connect directly to corporate data stores, accelerating app development. While this boosts productivity, it also creates vulnerabilities. Network traffic linked to these tools jumped 73% in three months, with 41% of companies already using at least one platform. Microsoft Azure OpenAI leads adoption (29%), followed by Amazon Bedrock (22%) and Google Vertex AI (7.2%).
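To make the exposure concrete, here is a minimal sketch, assuming a hypothetical internal document and the Azure client in the standard `openai` Python SDK, of how an employee-built tool might feed corporate data to a hosted platform; the endpoint, deployment name, and file path are illustrative assumptions, not details from the report.

```python
# Hypothetical sketch: an employee-built helper that sends internal data
# to a hosted GenAI platform. The endpoint, deployment name, and document
# path are illustrative assumptions.
import os
from openai import AzureOpenAI  # Azure client shipped with the openai SDK

client = AzureOpenAI(
    api_key=os.environ["AZURE_OPENAI_API_KEY"],
    api_version="2024-02-01",
    azure_endpoint="https://example-corp.openai.azure.com",  # assumed endpoint
)

# The sensitive step: the raw contents of an internal file become part of
# the prompt payload that leaves the corporate network.
internal_doc = open("q3_sales_forecast.txt").read()

response = client.chat.completions.create(
    model="gpt-4o-internal",  # assumed deployment name
    messages=[
        {"role": "system", "content": "Summarize internal documents."},
        {"role": "user", "content": internal_doc},
    ],
)
print(response.choices[0].message.content)
```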
Security teams face a dilemma: balancing innovation with risk mitigation. “Organizations must track who’s building AI apps, where they’re deployed, and how data flows,” warns Ray Canzanese of Netskope Threat Labs. Proactive monitoring and updated data loss prevention (DLP) policies are critical as AI usage expands.
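What that tracking might look like in practice: the sketch below, a rough illustration rather than anything from Netskope, scans a generic proxy log for traffic to known GenAI platform domains and flags unusually large uploads. The log format, domain list, and 10 MB threshold are all assumptions.

```python
# Rough illustration of egress monitoring for GenAI platform traffic.
# The log format (user, host, bytes_sent) and the domain list are
# assumptions for the sketch, not an official detection rule.
import csv
from collections import defaultdict

GENAI_DOMAINS = {
    "openai.azure.com",
    "bedrock-runtime.us-east-1.amazonaws.com",
    "aiplatform.googleapis.com",
}
UPLOAD_ALERT_BYTES = 10 * 1024 * 1024  # assumed 10 MB per-user threshold

def flag_genai_uploads(proxy_log_path: str) -> dict[str, int]:
    """Sum bytes each user sent to GenAI platform endpoints and return
    only the users who exceed the alert threshold."""
    totals: dict[str, int] = defaultdict(int)
    with open(proxy_log_path, newline="") as fh:
        for row in csv.DictReader(fh):  # expects user,host,bytes_sent columns
            if any(row["host"].endswith(d) for d in GENAI_DOMAINS):
                totals[row["user"]] += int(row["bytes_sent"])
    return {u: b for u, b in totals.items() if b > UPLOAD_ALERT_BYTES}

if __name__ == "__main__":
    for user, sent in flag_genai_uploads("proxy_egress.csv").items():
        print(f"ALERT: {user} sent {sent / 1e6:.1f} MB to GenAI platforms")
```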
On-premises AI deployments add another layer of complexity: 34% of firms now use LLM interfaces, with Ollama emerging as the frontrunner. Employees are also experimenting with AI marketplaces like Hugging Face, accessed by 67% of organizations. The rise of AI agents compounds these trends: 39% of companies use GitHub Copilot, and 5.5% run on-premises agents built with popular frameworks.
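For context on why on-premises interfaces shift the burden in-house: a locally hosted Ollama instance exposes an HTTP API on the machine itself, so prompts and outputs never pass through a vendor whose controls a security team could lean on. A minimal sketch, assuming Ollama's default local port and a locally pulled model:

```python
# Minimal sketch of querying a locally hosted Ollama instance.
# Traffic stays on localhost, so any logging, DLP, or access control
# must be built by the organization itself. The model name is an assumption.
import requests

resp = requests.post(
    "http://localhost:11434/api/generate",  # Ollama's default local endpoint
    json={
        "model": "llama3",   # assumes this model has been pulled locally
        "prompt": "Summarize our incident response runbook.",
        "stream": False,     # return one JSON object instead of a stream
    },
    timeout=120,
)
resp.raise_for_status()
print(resp.json()["response"])
```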
SaaS-based GenAI tools continue proliferating, with 1,550 unique apps tracked, up from just 317 earlier this year. Enterprises now use an average of 15 GenAI apps each, uploading 8.2 GB of data monthly. Purpose-built tools like Gemini and Copilot are gaining traction as security teams refine access controls. Notably, ChatGPT saw its first decline in enterprise use, while rivals like Anthropic Claude and Perplexity AI gained ground.
Even Grok, despite lingering security concerns, cracked the top 10 most-used apps as block rates fell. The trend underscores a broader shift: businesses are embracing AI’s potential but must prioritize granular controls to safeguard sensitive data amid rapid innovation.
(Source: Help Net Security)