
Nudge Security Adds AI Governance to Its Platform

Summary

– Nudge Security expanded its platform to help organizations manage AI data security risks while enabling workforce AI use.
– New capabilities include monitoring AI conversations for sensitive data and enforcing usage policies via the browser.
– The platform provides AI usage monitoring, detects risky data-sharing integrations, and summarizes vendor data policies.
– AI security risks are widespread, with organizations using many AI tools and facing complex integration and data access challenges.
– The solution focuses on visibility and control across the SaaS ecosystem, engaging employees as active participants in governance.

Businesses are rapidly adopting artificial intelligence tools, but this surge introduces significant data security challenges that require new forms of governance. Nudge Security has expanded its platform with specific features designed to help organizations mitigate AI-related risks while still enabling productive workforce use of these technologies. The update introduces several key capabilities to address the complex security landscape created by ubiquitous AI applications.

A central new feature is AI conversation monitoring, which detects sensitive information shared through file uploads or direct chats with popular AI chatbots. This monitoring covers platforms like ChatGPT, Gemini, Microsoft Copilot, and Perplexity. To guide employee behavior in real time, the platform also enforces policy directly in the web browser, delivering guardrails and educational prompts as staff interact with AI tools so that the company’s acceptable use policy is applied at the moment an action is taken.
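To make the idea concrete, here is a minimal, hypothetical sketch of the kind of client-side check a browser-based guardrail could run on an outgoing prompt before it leaves the page. The patterns, names, and the `scanPrompt` function are illustrative assumptions, not Nudge Security’s implementation:

```typescript
// Hypothetical sketch: pattern-based detection of sensitive data in an
// outgoing AI chat prompt, of the kind a browser extension could run
// before the text is submitted. Detectors shown are illustrative; a real
// rule set would be broader and validated (e.g. Luhn checks for cards).

interface Finding {
  kind: string;
  match: string;
}

const DETECTORS: { kind: string; pattern: RegExp }[] = [
  { kind: "credit-card", pattern: /\b(?:\d[ -]?){13,16}\b/g },
  { kind: "us-ssn", pattern: /\b\d{3}-\d{2}-\d{4}\b/g },
  { kind: "api-key", pattern: /\b(?:sk|key|token)[-_][A-Za-z0-9]{16,}\b/g },
];

function scanPrompt(text: string): Finding[] {
  const findings: Finding[] = [];
  for (const { kind, pattern } of DETECTORS) {
    for (const m of text.matchAll(pattern)) {
      findings.push({ kind, match: m[0] });
    }
  }
  return findings;
}

// In an extension, this check would gate the chat form's submit handler
// and surface an in-page nudge rather than silently blocking the user.
const draft = "Summarize this: card 4111 1111 1111 1111, key sk-abc123def456ghi789";
const hits = scanPrompt(draft);
if (hits.length > 0) {
  console.warn("Prompt contains possible sensitive data:", hits);
}
```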

For broader oversight, AI usage monitoring provides visibility into trends across the organization. Security teams can see Daily Active Users broken down by department, individual, and specific AI applications, whether those tools are officially approved or unsanctioned. This allows for quick responses to both business needs and potential threats. Another critical function is risky integration detection, which automatically surfaces data-sharing connections. It identifies OAuth grants and API permissions that may provide AI tools with ongoing access to sensitive corporate information.
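As a rough illustration of how risky integrations can be surfaced, the sketch below filters observed OAuth grants against a watchlist of broad, data-bearing scopes. The scope strings are real Google Workspace examples, but the grant records and the `riskyGrants` helper are assumptions for illustration, not Nudge Security’s detection logic:

```typescript
// Hypothetical sketch of risky-integration triage: flag OAuth grants
// whose scopes imply standing access to sensitive corporate data.

interface OAuthGrant {
  user: string;
  app: string;
  scopes: string[];
}

// Broad scopes that warrant review when granted to an AI tool
// (these are real Google Workspace scope URIs).
const HIGH_RISK_SCOPES = new Set([
  "https://www.googleapis.com/auth/drive",          // full Drive read/write
  "https://www.googleapis.com/auth/gmail.readonly", // read all mail
  "https://www.googleapis.com/auth/calendar",       // full calendar access
]);

function riskyGrants(grants: OAuthGrant[]): OAuthGrant[] {
  return grants.filter((g) => g.scopes.some((s) => HIGH_RISK_SCOPES.has(s)));
}

const observed: OAuthGrant[] = [
  { user: "ana@example.com", app: "ai-notetaker", scopes: ["https://www.googleapis.com/auth/calendar"] },
  { user: "bo@example.com", app: "pdf-viewer", scopes: ["openid", "email"] },
];

for (const g of riskyGrants(observed)) {
  console.log(`Review: ${g.app} holds broad access granted by ${g.user}`);
}
```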

To aid in vendor assessment, the platform generates condensed summaries of each vendor’s data training policies, clarifying how the AI or SaaS provider uses, retains, and handles customer data. Finally, automated playbooks help scale ongoing governance by simplifying workflows such as tracking policy acknowledgements, revoking risky data permissions, and orchestrating account removals.
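A hedged sketch of what such a playbook could look like as code follows. Every function here is a stand-in for a real integration (a ticketing system, an identity provider’s revocation API); none of it reflects Nudge Security’s actual workflow engine:

```typescript
// Hypothetical playbook sketch: chaining governance steps into one
// workflow. Each step function is a placeholder for a real integration.

type StepResult = { step: string; ok: boolean; detail?: string };

async function requestPolicyAck(user: string): Promise<StepResult> {
  // Stand-in: send the acceptable-use policy and record acknowledgement.
  return { step: "policy-ack", ok: true, detail: `ack requested for ${user}` };
}

async function revokeGrant(user: string, app: string): Promise<StepResult> {
  // Stand-in: call the identity provider's revocation API for this grant.
  return { step: "revoke-grant", ok: true, detail: `${app} grant revoked for ${user}` };
}

async function runPlaybook(user: string, riskyApp: string): Promise<void> {
  const results: StepResult[] = [];
  results.push(await requestPolicyAck(user));
  results.push(await revokeGrant(user, riskyApp));
  // A real playbook would branch on failures, retry, and notify owners.
  for (const r of results) {
    console.log(`${r.step}: ${r.ok ? "done" : "failed"} (${r.detail ?? ""})`);
  }
}

runPlaybook("ana@example.com", "ai-notetaker").catch(console.error);
```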

These enhancements build upon the AI security functions Nudge Security has offered since 2023, such as initial discovery of all AI apps and users, visibility into AI dependencies within the SaaS supply chain, and security profiles for thousands of providers. The goal is to help customers advance their AI initiatives without compromising on security controls or compliance.

The need for such governance is underscored by data from Nudge Security’s own findings. Organizations are now navigating a landscape with over 1,500 unique AI tools discovered across their client base, averaging 39 distinct tools per company. More than half of all SaaS applications list a major large language model provider in their data subprocessor agreements. Furthermore, the average employee has approximately 70 OAuth grants, many of which enable extensive data sharing.

This reflects a fundamental shift in the risk environment. Security challenges now extend across the entire SaaS ecosystem, not just standalone AI tools. Risks emerge wherever AI interacts with software-as-a-service, from built-in AI features in productivity apps to integrations that create direct pipelines to AI models. A single OAuth grant can provide an AI vendor with continuous, often overlooked, access to an organization’s most critical data.
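As a worked illustration of why a single grant can mean standing access: in standard OAuth 2.0, a consent request like the one below (the endpoint and parameters are Google’s; the requesting app is invented) returns a refresh token, which lets the vendor mint new access tokens indefinitely until the grant is revoked:

```typescript
// Building a hypothetical OAuth consent URL. With access_type=offline,
// Google issues a refresh token alongside the access token, giving the
// requesting app continuous access to the granted scope until revoked.

const consentUrl = new URL("https://accounts.google.com/o/oauth2/v2/auth");
consentUrl.searchParams.set("client_id", "EXAMPLE_AI_APP_CLIENT_ID");      // invented app
consentUrl.searchParams.set("redirect_uri", "https://ai-app.example.com/callback");
consentUrl.searchParams.set("response_type", "code");
consentUrl.searchParams.set("scope", "https://www.googleapis.com/auth/drive"); // full Drive access
consentUrl.searchParams.set("access_type", "offline"); // request a refresh token

console.log(consentUrl.toString());
```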

The platform’s design engages employees as active participants in security. By delivering guardrails at the precise moment a decision is made, which is also the point of risk, it aims to build a more sustainable, human-centric governance model. This approach provides the comprehensive visibility and control needed to manage the intricate web of integrations and access pathways that define the modern, AI-augmented workplace.

(Source: HelpNet Security)

Topics

AI security, data monitoring, AI governance, policy enforcement, SaaS ecosystem, OAuth grants, AI tools, risk detection, compliance management, workforce engagement