Databricks & Noma Solve CISO AI Security Risks

▼ Summary
– AI inference is the most vulnerable stage for security threats like prompt injection, data leaks, and model jailbreaks, hindering enterprise AI deployments.
– Databricks Ventures and Noma Security secured $32 million in Series A funding to address AI inference-stage security gaps with real-time threat analytics and runtime defenses.
– Gartner predicts over 80% of unauthorized AI incidents will stem from internal misuse by 2026, driving demand for advanced AI Trust, Risk, and Security Management (TRiSM) solutions.
– Noma’s proactive red teaming identifies vulnerabilities pre-production, ensuring AI model integrity and robust runtime protection against adversarial attacks.
– The Databricks-Noma partnership combines Lakehouse architecture governance with real-time threat mitigation to secure AI workflows from development to production.
AI security risks at the inference stage have become a top concern for enterprises scaling their artificial intelligence initiatives. Chief Information Security Officers (CISOs) recognize that live AI models interacting with real-world data create vulnerabilities, exposing organizations to threats like prompt injection, data leaks, and model manipulation.
Databricks Ventures and Noma Security are tackling these challenges through a strategic partnership, supported by a $32 million Series A funding round led by Ballistic Ventures and Glilot Capital. This collaboration focuses on closing critical security gaps that have slowed enterprise AI adoption.
Niv Braun, CEO of Noma Security, emphasized the urgency of the issue: “Security remains the biggest barrier to widespread AI deployment.” By integrating real-time threat analytics, runtime protections, and proactive adversarial testing into workflows, the partnership aims to help businesses adopt AI with greater confidence.
The Critical Need for Runtime AI Security
Braun added that runtime defenses must evolve alongside AI complexity, requiring continuous monitoring to prevent unauthorized data exposure and adversarial attacks. Gartner research supports this, predicting that 80% of AI security incidents through 2026 will stem from internal misuse rather than external breaches, highlighting the need for integrated governance.
Proactive Red Teaming Strengthens AI Integrity
“Red teaming isn’t optional—it’s essential,” Braun said. By stress-testing AI systems before production, enterprises can detect and mitigate risks early, ensuring models remain resilient against evolving threats.
How Databricks and Noma Counter AI Threats
- Prompt Injection – Malicious inputs that hijack model behavior.
- Data Leakage – Unintended exposure of sensitive information.
- Model Jailbreaking – Circumventing built-in safety controls.
By combining Noma’s runtime monitoring with Databricks’ governance frameworks, the solution aligns with industry standards like OWASP and MITRE ATLAS, ensuring compliance with regulations such as the EU AI Act.
Databricks Lakehouse: A Secure AI Foundation
Braun noted that automated mapping to security frameworks simplifies regulatory adherence, embedding compliance directly into operational workflows.
Scaling AI with Confidence
Ferguson summarized the mission: “Enterprises need end-to-end security—especially at runtime. Our partnership ensures they can scale AI securely.”
By addressing inference-stage risks head-on, the initiative aims to unlock AI’s full potential while keeping threats in check.
(Source: VentureBeat)