Nearly Half of Workers Share Confidential Data with AI

Summary
– AI adoption is rapidly increasing, with 65% of people now using it daily, but 58% receive no workplace training on its security risks.
– Users frequently share sensitive data with AI tools; 43% admit to inputting confidential company or client information.
– AI agents and chatbots present significant security threats including potential data leaks, system access vulnerabilities, and database manipulation risks.
– Generative AI has become ubiquitous as companies integrate it into workplace tools like Microsoft Office, often without user choice.
– Most interactions with AI chatbots become training data rather than remaining private, as demonstrated by Samsung’s 2023 internal data leak incident.

A significant gap is emerging between the rapid adoption of generative artificial intelligence and the training needed to use it securely. A new international study reveals that while over 65% of individuals now use AI daily, a staggering 58% have received no workplace instruction on the associated data security and privacy dangers. This disconnect is creating substantial risks for organizations worldwide.
The research, conducted by the National Cybersecurity Alliance and CybSafe, surveyed more than 6,500 people across seven countries. It found a 21% year-over-year increase in daily AI usage. Lisa Plaggemier, Executive Director at the NCA, noted that people are embracing the technology in their personal and professional lives much faster than they are learning about its potential pitfalls.
Perhaps the most alarming finding is that 43% of workers admitted to sharing sensitive company information, including confidential financial records and client data, during their conversations with AI tools. These numbers paint a clear picture: the deployment of AI is surging ahead, while comprehensive safety training programs are struggling to keep pace.
This study adds detail to a worrying trend that has been developing for months. As AI integration deepens, so does the understanding of its inherent security vulnerabilities. A separate survey from earlier in the year found that an overwhelming 96% of IT professionals view AI agents as a security risk. Despite this concern, 84% of those same professionals reported that their employers were already deploying the technology internally.
AI agents, designed to automate complex tasks and save time, present novel dangers. To function, these systems often require access to an organization’s internal documents and digital tools, which dramatically increases the potential for data leaks. The risks are not merely theoretical; coding agents have been exploited by malicious hackers, and in one notorious case, an agent was responsible for deleting a company’s entire database.
Even conventional chatbots carry significant risks. Beyond their well-documented tendency to “hallucinate” and produce false information, it is crucial to remember that most user interactions are fed back into the system as training data. This means conversations are not truly private. Samsung engineers learned this lesson the hard way in 2023 when they accidentally leaked proprietary information to ChatGPT, leading the company to ban its use.
For many, using generative AI was never a conscious choice. The technology is increasingly being integrated directly into the digital tools people use every day, particularly in workplace software. Microsoft, for instance, recently announced the integration of AI agents into its core Office applications, including Word, Excel, and PowerPoint. When this forced adoption is combined with a lack of security training, it creates a perfect storm of risk for individuals and businesses alike.
Virtually every major software company has been racing to develop its own generative AI product, fueled by mainstream excitement and the promise of future profits. The market has become so saturated that some companies are now creating AI tools specifically designed to manage other AI systems. This rapid, often unregulated, proliferation underscores the urgent need for structured education and clear security protocols to protect sensitive information in the age of intelligent automation.
(Source: ZDNET)