Upwind Integrates Real-Time AI Security into CNAPP Platform

▼ Summary
– Upwind has launched an integrated AI security suite that expands its CNAPP to protect the enterprise AI attack surface with real-time security and posture management.
– The suite addresses the security challenges of rapid AI adoption by providing visibility into AI behavior and data flows, which traditional security lacks.
– Its approach is “inside-out” and runtime-first, using real-time signals and internal workload observation instead of static configurations to ground AI risk in actual activity.
– Key capabilities include AI Security Posture Management, AI Detection & Response, and an AI Bill of Materials for comprehensive inventory and monitoring.
– The platform also provides AI Network Visibility, MCP Security for tracing agent actions, and AI Security Testing against threats like prompt injection, unifying cloud and AI risk management.
Upwind has significantly enhanced its Cloud-Native Application Protection Platform (CNAPP) by embedding a comprehensive real-time AI security suite. This strategic integration moves beyond treating artificial intelligence as a siloed concern, instead weaving AI posture management, runtime protection, and agent monitoring directly into the existing fabric of cloud security. The expansion leverages Upwind’s deep contextual awareness of data flows, identity, APIs, and cloud infrastructure to provide a unified defense for the modern enterprise’s AI attack surface.
According to company CEO Amiram Shachar, isolating AI security is a flawed strategy. He argues that it must function as an integral component of a broader security ecosystem. This approach ensures AI protections can immediately benefit from the rich data and operational context already aggregated by the CNAPP platform, creating a more cohesive and effective security posture.
The breakneck pace of AI adoption has introduced complex security challenges that traditional tools struggle to address. Models, agents, and data pipelines now operate across diverse services and infrastructures, creating a dynamic and ephemeral environment. Security teams frequently find themselves unable to trace AI behavior, validate configurations, or assess the real-world impact of AI-driven decisions. This visibility gap represents a significant risk, as conventional security methods lack the necessary shared context and live evidence to be effective.
Securing these next-generation workloads demands a fundamental shift in perspective. It requires a focus on real-time signals, API interactions, and Layer 7 visibility—principles at the core of Upwind’s inside-out security methodology. Rather than depending on static configurations or periodic snapshots, this model observes actual traffic, data movements, and behavioral patterns from within the workload during execution. Security is therefore based on observable reality, not on assumptions.
By applying this runtime-first model to AI, Upwind grounds risk assessment in live activity. This provides security teams with an accurate, prioritized view of what is truly occurring when it matters most. The new capabilities deliver critical visibility into where AI systems are operating, how models and agents behave in real time, and what sensitive data they access or transmit.
The integrated suite encompasses several key functionalities designed to strengthen AI security across the entire stack:
- AI Security Posture Management (AI-SPM) focuses on securing exposed inference endpoints, enforcing model governance, tightening overly permissive IAM roles, and detecting leaked API keys. It correlates configuration issues with actual runtime activity to highlight the most critical risks.
- AI Detection & Response (AI-DR) monitors agents and LLM infrastructure for anomalies and jailbreak attempts. Through deep analysis of network activity, processes, and prompt payloads across multiple layers, it enables teams to identify and respond to malicious AI behavior based on live evidence.
- AI Bill of Materials (AI-BOM) automatically maps models, frameworks, SDKs, and agent systems across source code, cloud inventories, and runtime evidence. This creates a comprehensive, real-time inventory that shows exactly what AI components are running, their locations, and their dependencies.
- AI Network Visibility extends Upwind’s network analysis to decode AI-specific traffic protocols like JSON-RPC and HTTP/2 streaming. It identifies connections to major AI services (e.g., OpenAI, AWS Bedrock) to detect unauthorized usage and flag sensitive data within prompts and inference payloads.
- MCP Security traces the complete sequence of an AI agent’s actions, from the initial prompt through subsequent function calls, file operations, and API interactions. This provides authoritative, runtime-grounded evidence of an agent’s activities, motivations, and ultimate impact on the system.
- AI Security Testing leverages Upwind’s attack surface management to proactively validate AI systems against evolving adversarial techniques. This includes testing for threats outlined in the OWASP Top 10 for LLMs, such as prompt injection, unsafe tool bindings, and hallucination-induced data exposure.
Collectively, these features offer enterprises a singular, integrated platform for managing both cloud and AI risk. The unification reduces operational complexity and provides the foundational clarity needed to support secure and scalable AI innovation. As Shachar notes, real security begins with real evidence; by bringing runtime clarity to AI, Upwind aims to define the next generation of secure artificial intelligence.
(Source: HelpNet Security)

