Artificial IntelligenceBusinessCybersecurityNewswire

Enterprise Data’s Fate in the Age of Ubiquitous GenAI

Originally published on: December 25, 2025
▼ Summary

– Generative AI is rapidly spreading across enterprises, creating new security risks by increasing data exposure and outpacing existing security policies and controls.
– The use of GenAI tools is leading to significant data exposure, with a large percentage of uploaded files and prompts containing sensitive information like source code and customer records.
– AI-powered threats, such as sophisticated fraud and deepfakes, are evolving faster than defenses, with most organizations’ safeguards failing during major incidents.
– There is a major disconnect between AI adoption and security readiness, as employee use surges but formal policies, testing, and investment in specific defenses like deepfake detection lag significantly.
– Organizations are sharing exponentially more data with AI applications, dramatically increasing the risk of breaches, while concerns about data privacy, trust in outputs, and unresolved application vulnerabilities remain widespread.

The rapid integration of generative AI into business operations is fundamentally altering data security landscapes, creating both unprecedented opportunities and significant new vulnerabilities. As these tools weave into daily workflows, from marketing to customer service, security teams face the immense challenge of tracking where sensitive information travels, who can access it, and how traditional security models are being upended. This shift is accelerating data exposure, introducing novel threats, and frequently outpacing the policies and controls organizations have in place.

A recent analysis reveals the scale of the problem, with sensitive data proliferating through unstructured files, duplicates, and risky sharing habits. The adoption of tools like Microsoft Copilot adds further complexity, layering new risks on top of persistent issues like oversharing. While AI promises efficiency and innovation, it simultaneously introduces security risks many companies are unprepared to handle.

The threat landscape has evolved dramatically. Generative AI has made fraud faster, cheaper, and more difficult to detect, enabling sophisticated attack sequences that blend spoofed logins, vendor impersonation, and deepfakes to mimic legitimate workflows. Traditional defenses, often siloed to single systems, are failing against these multi-platform assaults. Manual checks and email filters are insufficient, with nearly 90% of organizations reporting at least one critical security control failed during a major incident.

Employee demand is fueling a 50% surge in the use of GenAI platforms, as staff seek to build custom applications. Despite a move toward sanctioned SaaS AI tools, the problem of shadow AI, unsanctioned applications used by employees, remains rampant. Estimates suggest over half of all current AI app adoption falls into this risky category, compounding potential security gaps.

Concrete data underscores the exposure. An examination of over a million GenAI prompts and thousands of uploaded files found that 22% of files and 4.37% of prompts contained sensitive information. This includes source code, credentials, proprietary algorithms, merger documents, and confidential financial records, precisely the data security leaders fear is leaking but struggle to quantify.

A significant policy gap exacerbates the risk. In Europe, nearly three-quarters of IT professionals report staff using generative AI, yet only about a third of organizations have established formal governance policies. While 63% express high concern about AI being weaponized against them and 71% anticipate a rise in convincing deepfakes, investment in defense lags. A mere 18% are allocating budget to deepfake-detection tools, creating a dangerous security deficit as AI-powered threats advance.

Testing and remediation efforts are not matching the perceived risk. Only 66% of organizations regularly test their GenAI-powered products, leaving a substantial portion vulnerable. Almost half of security professionals believe a “strategic pause” is necessary to rebuild defenses, but the relentless pace of adoption offers no such respite.

The scope of data sharing is staggering. The volume of data businesses share with GenAI apps has multiplied thirtyfold in just one year. The average organization now transmits over 7.7GB monthly to AI tools, a colossal increase from 250MB. This data often includes source code, regulated information, passwords, and intellectual property, dramatically raising the stakes for breaches and compliance failures. With 75% of enterprise users now accessing applications with AI features, security teams must also contend with the rise of the unintentional insider threat.

In sectors like financial services, the tension between opportunity and risk is acute. GenAI can unlock insights from vast unstructured datasets, improving operations and detecting fraud. However, widespread hesitation persists due to fears that sensitive data could be inadvertently used to train public AI models. While most employee interactions with AI are benign, such as requesting text summaries or code documentation, about 8.5% of prompts are problematic, risking exposure of confidential information.

Supply chain operations are also embracing AI, with 97% of leaders using some form of GenAI. Yet only a third employ tools built for supply chain tasks, and 43% worry about how their data is used or shared. Another 40% simply do not trust the accuracy of the AI’s outputs, highlighting a crisis of confidence alongside the security concerns.

Technical vulnerabilities present another front. A vast majority of firms have penetration-tested their GenAI web applications in the past year, with 32% of tests uncovering serious flaws. Alarmingly, only 21% of those critical vulnerabilities were subsequently remediated, leaving risks like prompt injection, model manipulation, and data leakage wide open.

Ultimately, 70% of organizations cite the blistering pace of AI development as their top security concern, ahead of data integrity and trustworthiness. While a third of businesses are already integrating or being transformed by this technology, the defensive frameworks are struggling to keep up. Generative AI has undoubtedly empowered malicious actors, making them more efficient at crafting phishing campaigns and scams. For now, it hasn’t made them inherently smarter, but it has irrevocably changed the battlefield, demanding a proactive and comprehensive rethink of enterprise data protection.

(Source: HelpNet Security)

Topics

data exposure 95% Security Risks 93% AI Adoption 90% data sharing 88% shadow ai 85% insider threats 83% policy gaps 82% fraud evolution 80% vulnerability testing 78% defense inadequacy 77%