AI & TechArtificial IntelligenceCybersecurityNewswireTechnology

Shadow AI: New Strategies to Solve an Old Problem

▼ Summary

– Shadow AI is the second-most common form of shadow IT, with 27% of surveyed workers using unapproved AI applications.
– Over a third of employees follow company AI policies most of the time, and many are unaware such policies exist, indicating gaps in communication and enforcement.
– 1Password advises companies to anticipate AI risks by implementing continuous monitoring, clear policies, and secure alternatives to unauthorized tools.
– Eye Security introduced “Prompt Injection for Good,” an open-source tool that embeds compliance warnings in documents to deter unsafe AI use with company data.
– The tool is a prototype that requires further development and doesn’t prevent data copying, but aims to inspire proactive data protection using ethical prompt injection.

A recent study from 1Password highlights that Shadow AI has become the second most widespread type of shadow IT within corporate settings. Surveying more than 5,000 IT, security, and general employees across the US, UK, Europe, Canada, and Singapore, the report uncovers that 27% of respondents use artificial intelligence applications not purchased or approved by their employer. Additionally, 37% follow their company’s AI policies only “most of the time.” According to 1Password, these figures indicate that many organizations lack well-defined AI usage policies and the enforcement mechanisms to back them up. A notable portion of workers are not even aware that their company has an AI policy in place.

Businesses are being urged to shift from a reactive to a proactive stance regarding AI risks. This involves implementing ongoing monitoring for unauthorized AI tools and developing the capability to block them before they cause harm. Security and IT departments should create straightforward, practical AI usage policies and ensure every employee is both aware of and understands them. When employees are found using unsanctioned tools, the organization should investigate the reasons behind this behavior and provide secure alternatives that fulfill the same requirements. It is also critical to design access and security controls with future AI developments, including agentic AI, in mind. Finally, companies are advised to adapt offensive cybersecurity techniques for defensive purposes to counter Shadow AI effectively.

An innovative approach to improving employee awareness and encouraging compliant behavior comes from the Dutch security firm Eye Security. Their “Prompt Injection for Good” is an open-source prototype tool that lets organizations embed specific prompts into company documents and email signatures. These prompts trigger compliance warnings when employees attempt to use personal AI tools with corporate data. Eye Security’s Chief Technology Officer, Piet Kerkhofs, explained that the goal is for end-users to receive a clear disclaimer, written by the company’s CISO, outlining the risks and consequences of their actions. Ideally, this will make employees reconsider before uploading sensitive documents to unapproved AI platforms again.

The framework supports regular testing of these defensive prompts against the latest large language models. It integrates with popular AI platforms and allows bulk testing of embedded prompts in corporate documents, providing a scoring overview for evaluation. As LLMs and their protective guardrails evolve, this tool helps defenders continuously test and adjust their prompts. Kerkhofs emphasized that the tool demonstrates the concept of ethical prompt injection, which can be broadly applied within data loss prevention solutions by embedding specific payloads into corporate documents. He expressed hope that vendors will experiment with this approach and eventually implement it in production environments, which is why the company shared its research and source code publicly.

Eye Security acknowledges that their solution is not perfect. Kerkhofs pointed out that the tool does not prevent employees from copying and pasting sensitive data into unsanctioned LLMs. Governing that type of activity would require browser extensions that monitor AI usage within the browser, raising obvious privacy concerns. Despite these limitations, the company hopes this prototype will inspire organizations to consider using this typically offensive hacking technique for proactive data protection and spur the development of even more effective solutions in the future.

(Source: HelpNet Security)

Topics

shadow ai 95% prompt injection 90% ai policies 90% employee awareness 85% employee compliance 85% data protection 80% risk anticipation 80% ethical hacking 75% continuous monitoring 75% policy enforcement 75%