Topic: ai security
-
Zscaler Boosts AI Security with Enhanced Visibility and Control
Businesses are rapidly adopting AI, but this creates new security vulnerabilities as traditional cybersecurity tools fail to protect AI systems' unique traffic and protocols. A major challenge is "shadow AI," where companies lack a complete inventory of their AI assets, creating blind spots that ...
Read More » -
Europe Sets New AI Security Standards
ETSI has published a new European standard (ETSI EN 304 223) establishing baseline security requirements specifically for AI systems, addressing unique vulnerabilities in their data pipelines and deployment. The framework tackles AI-specific threats like data poisoning and prompt injection, integ...
Read More » -
Nudge Security Adds AI Governance to Its Platform
Nudge Security has expanded its platform with new features to help organizations manage AI-related security risks, including monitoring conversations and usage to prevent sensitive data leaks. The platform provides visibility into AI tool adoption across departments and detects risky data-sharing...
Read More » -
Top Infosec Products Launched This Week: December 2025
BlackFog launched ADX Vision to prevent data loss from unauthorized AI use by detecting shadow AI activity and blocking unauthorized data transfers in real time on endpoints. Datadog introduced Bits AI SRE, an AI agent that streamlines incident management by quickly identifying root causes to ena...
Read More » -
Upwind Integrates Real-Time AI Security into CNAPP Platform
Upwind has integrated a real-time AI security suite into its CNAPP, moving beyond siloed AI security to provide unified, runtime-first protection for AI workloads within the broader cloud ecosystem. The platform addresses modern AI security challenges by offering key functionalities like posture ...
Read More » -
Metis: AI-Powered Open-Source Security Code Analyzer
Metis is an AI-driven, open-source security analysis tool that identifies subtle vulnerabilities in large or legacy codebases, surpassing traditional scanners. It uses large language models and retrieval augmented generation to understand code context and relationships, providing precise recommen...
Read More » -
Sam Altman: Personalized AI's Privacy Risks
OpenAI CEO Sam Altman identifies AI security as the critical challenge in AI development, urging students to focus on this field due to evolving safety concerns into security issues. He highlights vulnerabilities in personalized AI systems, where malicious actors could exploit connections to exte...
Read More » -
Tenable Uncovers Critical Google Gemini AI Flaws That Risked User Data
Tenable Research uncovered three critical security flaws in Google's Gemini AI, known as the Gemini Trifecta, which allowed attackers to manipulate the AI and steal sensitive user data without direct system access. The vulnerabilities affected components like Gemini Cloud Assist, Search Personali...
Read More » -
Defending Against Adversarial AI Attacks: A Complete Guide
Adversarial AI attacks are a growing threat where subtle data alterations can deceive models into making harmful decisions, requiring both technical and strategic defenses. The book provides practical guidance on creating test environments, executing attacks like data poisoning, and implementing ...
Read More » -
OpenAI's ChatGPT Defense: Why Safety Isn't Guaranteed
OpenAI acknowledges that complete security for its AI-powered Atlas browser may be impossible, highlighting a core tension where the tools' useful capabilities also create significant new cyberattack risks. To proactively find vulnerabilities, OpenAI uses an AI-based automated attacker that simul...
Read More » -
How to Build Trustworthy and Secure AI for Cyber Resilience
Securing AI systems is now as critical as using AI for defense, requiring a shift to cyber resilience that ensures these systems can withstand and recover from sophisticated attacks. The evolving threat landscape includes AI-specific risks like data poisoning, model theft, and prompt injection, n...
Read More » -
CIS, Astrix & Cequence Release AI Security Best Practices
A new partnership between CIS, Astrix Security, and Cequence Security will develop specialized security best practices and guides to extend the CIS Critical Security Controls framework into AI and agentic systems. The initiative will produce two guides focusing on securing AI Agent Environments a...
Read More » -
Trend Vision One: Proactive AI Security for Your Environment
Trend Vision One's AI Security Package, launching in December, provides centralized exposure management and protection across the entire AI application lifecycle, from development to runtime operations. The solution addresses the limitations of conventional security tools by offering specialized ...
Read More » -
AI Cloud Protect: Next-Gen Enterprise Security by Check Point & NVIDIA
AI Cloud Protect is a joint security solution from Check Point and NVIDIA designed to safeguard on-premises enterprise AI environments, protecting the entire AI lifecycle from development to inference without compromising performance. The solution addresses urgent security needs, as over half of ...
Read More » -
Varonis Interceptor: AI-Powered Email Security
AI-powered email threats are becoming more sophisticated, using deceptive phishing tactics that mimic legitimate communications to bypass traditional security measures. Varonis Interceptor employs a multimodal AI approach, combining vision, language, and behavior models to detect and block advanc...
Read More » -
Okta's Identity Security Fabric: Securing the AI-Driven Enterprise
Securing AI systems is a critical enterprise priority, with Okta introducing an identity security fabric to manage non-human identities and combat AI-driven fraud through tamper-proof credentials. A significant security gap exists as rapid AI adoption has outpaced governance, leaving organization...
Read More » -
TeKnowledge Launches AI-Ready Security Suite for Cyber Resilience
TeKnowledge has launched an AI-Ready Security Suite, a managed service to help large enterprises securely manage the risks associated with rapid generative AI adoption, such as prompt injection and data leakage. The suite is built on a three-pillar framework—Assess, Implement, and Optimize—that p...
Read More » -
AI Security Map: How Vulnerabilities Cause Real-World Harm
A single prompt injection vulnerability in an AI chatbot can rapidly expose sensitive data, erode user trust, and trigger regulatory scrutiny, demonstrating how technical flaws can quickly escalate into broader operational and societal consequences. The AI Security Map introduces two interconnect...
Read More » -
Your AI Agents Are Zero Trust's Biggest Blind Spot
The autonomy of AI agents introduces security vulnerabilities in Zero Trust architectures by bypassing continuous verification requirements through inherited or poorly managed credentials. Organizations must adopt the NIST AI Risk Management Framework with a focus on identity governance, ensuring...
Read More » -
Moltbot Rebrands, But Security Issues Persist
Moltbot is a popular open-source AI assistant that automates tasks but requires extensive access to private user accounts and credentials, raising significant security concerns. The tool faces critical vulnerabilities, including common user misconfigurations and a risky trust-based skills library...
Read More » -
Patched FortiGate Firewalls Hacked, Cisco RCE Probed
A critical authentication bypass flaw (CVE-2025-59718) persists in Fortinet firewalls despite patches, while Cisco urgently addressed an exploited RCE vulnerability (CVE-2026-20045), highlighting ongoing challenges in securing network infrastructure. Sophisticated phishing targets the energy sect...
Read More » -
WitnessAI Raises $58M to Tackle Enterprise AI's Top Risk
The rapid adoption of enterprise AI tools like chatbots creates significant security risks, including data leaks and regulatory breaches, prompting new solutions like WitnessAI's $58 million-funded platform. WitnessAI's security platform acts as a protective layer, monitoring and controlling AI i...
Read More » -
Cyera Raises $400M to Expand AI Data Security Platform
Cyera, an AI-powered data security firm, secured a $400 million Series F investment led by Blackstone, tripling its valuation to $9 billion and highlighting strong market demand for AI security solutions. The rapid adoption of AI in enterprises is creating significant security risks, as deploymen...
Read More » -
WWT Launches ARMOR: A Vendor-Agnostic Framework for Secure AI
WWT has launched ARMOR, a vendor-agnostic framework developed with NVIDIA and Texas A&M to secure the entire AI lifecycle from chip design to deployment. The framework is structured around six core security domains, including governance, model security, infrastructure, and data protection, to add...
Read More » -
NIST, MITRE Launch $20M AI Centers for Manufacturing and Cybersecurity
NIST is investing $20 million to establish two AI research hubs, managed by MITRE, to strengthen U.S. technological leadership in manufacturing and cybersecurity. The centers aim to boost domestic manufacturing competitiveness and secure critical infrastructure by developing new technology evalua...
Read More » -
OpenAI Warns AI Browsers Face Permanent Prompt Injection Risk
OpenAI identifies prompt injection attacks, where hidden malicious instructions manipulate AI agents, as a fundamental and likely unsolvable long-term security challenge for AI-powered web browsers. To combat this, OpenAI employs an automated LLM-based attacker that uses reinforcement learning to...
Read More » -
AppGate Secures AI Workloads with Zero Trust Agentic AI Core
AppGate has introduced Agentic AI Core Protection to extend zero-trust security principles directly to AI workloads, enabling secure innovation across on-premises and cloud environments. Traditional security models are inadequate for AI agents, as their exposed interfaces create new attack vector...
Read More » -
Check Point's Quantum Firewall R82.10: AI & Zero Trust Security
Check Point's Quantum Firewall R82.10 update introduces twenty new features focused on AI security, hybrid network protection, and zero-trust architecture without increasing operational complexity. The software enhances security by preventing threats proactively, offering capabilities like monito...
Read More » -
Secure Your Data and AI Strategy for Success
AI security is now a primary business priority, with every new tool being evaluated for its security posture before its functional capabilities. Organizations struggle to balance innovation with robust security, requiring a proactive strategy that embeds security into AI projects from the start. ...
Read More » -
Microsoft's New AI Security Agents Outsmart Hackers
Microsoft has launched advanced AI security agents that proactively identify and neutralize cyber threats, available at no extra cost for Security Copilot users on Microsoft 365 E5 plans. These AI agents are integrated into platforms like Defender, Entra, and Intune to shift security from reactiv...
Read More » -
Microsoft's AI security flaw sparks data theft fears
Microsoft has issued a security warning about its experimental AI agent, Copilot Actions, due to risks that it could be exploited to infect devices and steal sensitive user information. The vulnerabilities are linked to inherent flaws in large language models, including AI hallucinations that pro...
Read More » -
Major AI Firms Expose Sensitive Data in Security Breaches
A majority of top AI companies have exposed sensitive data like API keys and security credentials through code-sharing platforms, affecting firms with a combined valuation over $400 billion. The rapid pace of AI innovation has led to cybersecurity lapses, with vulnerabilities present regardless o...
Read More » -
Google: AI Will Fuel a Cybercrime Surge by 2026
AI is dramatically transforming cybersecurity by fueling a surge in automated cybercrime, including sophisticated phishing, voice cloning, and prompt injection attacks, while also enabling new defense mechanisms. The rise of AI agents and unauthorized tools complicates security management, requir...
Read More » -
Prowler Integrates AI into Security Workflows
Prowler has launched Lighthouse AI and an MCP Server, integrating AI into DevSecOps to speed up risk analysis, compliance, and remediation in multi-cloud environments. These tools enable proactive security by automating decision-making, reducing response times, and embedding security directly int...
Read More » -
Fortinet Unveils End-to-End AI Infrastructure Security
Fortinet has launched the Secure AI Data Center solution, a comprehensive framework designed to protect the entire AI infrastructure, from data centers to applications and large language models, while offering advanced threat defense and reducing energy consumption by an average of 69%. The solut...
Read More » -
Zscaler Buys SPLX to Secure AI Investments
Zscaler has acquired SPLX to enhance its Zero Trust Exchange platform with advanced AI security capabilities, including asset discovery, automated red teaming, and governance tools. The integration addresses the urgent need to secure the entire AI lifecycle, protecting sensitive data like prompts...
Read More » -
Top Infosec Products of October 2025
The cybersecurity landscape in October 2025 saw companies introducing AI-driven solutions to automate security processes, improve visibility, and address evolving digital threats. Innovations included tools for validating defenses, prioritizing vulnerabilities, safeguarding mobile apps, and integ...
Read More » -
Gartner's Top Tech Trends Shaping 2026
Businesses must adapt to transformative technology trends by 2026, focusing on AI, cybersecurity, and data governance to stay competitive and redefine operational models. Key trends include AI security platforms for risk management, preemptive cybersecurity using AI to prevent threats, and confid...
Read More » -
Nexos.ai Secures $30M to Accelerate Enterprise AI Adoption
Nexos.ai raised $30 million in Series A funding, valuing the company at $350 million, to support its mission of enabling secure enterprise AI adoption by acting as a trusted intermediary platform. The platform addresses data security concerns by providing a neutral bridge between employees and AI...
Read More » -
Microsoft Fights 100 Trillion AI Attacks Daily
Microsoft processes over 100 trillion security signals daily, indicating a massive surge in AI-powered cyberattacks that threaten economic stability and personal safety. AI is dual-use, enabling both advanced cyberattacks like autonomous malware and faster defenses, with identity-based attacks an...
Read More » -
Cranium AI Boosts Compliance, Security & Scalability
Cranium AI has launched new agentic AI features to help businesses accelerate AI agent use, simplify compliance, and strengthen security with operational control and automated monitoring. Key products include AgentSensor for visibility into AI agents, CloudSensor for cloud security monitoring, an...
Read More » -
ORCA Opti Wins AI Innovation Award After Acquisition Triumph
ORCA Opti won the 2025 VALA Award for its Virtual Veterans AI chatbot, developed with the State Library of Queensland, which engages users in conversational learning about ANZAC history through a WWI soldier persona. The project was built in three months using extensive historical data, including...
Read More » -
Microsoft Warns AI Could Engineer Biological Threats
A Microsoft report warns that AI could be exploited to design biological threats, such as redesigning toxic proteins, which lowers barriers to creating dangerous agents and highlights the need for stronger global biosecurity. Experts call for enhanced DNA synthesis screening and enforcement mecha...
Read More » -
Inside Google DeepMind Security: A CISO Chat with John 'Four' Flynn
Google DeepMind, under John Flynn's leadership since 2024, is a leading AI research organization focused on innovation and security, evolving from its 2010 founding and 2023 merger with Google Brain. Flynn's cybersecurity career was shaped by early tech fascination and exposure to unsafe environm...
Read More » -
C-Suite's AI Obsession Fuels Critical Security Gaps
Modern organizations face significant security vulnerabilities due to a disconnect between rapid technological adoption and inadequate security practices, with 34% experiencing AI-related breaches. Many companies rely on outdated, reactive metrics like incident frequency, which only assess damage...
Read More » -
AI SSD Subscription Fights Ransomware at Hardware Level
Flexxon has launched the X-Phy Guard Solution, an AI-integrated SSD designed as a final defense against cyberattacks like ransomware, targeting high-security sectors with a subscription starting at $249 annually. The drive's AI engine monitors for threats such as encryption patterns and physical ...
Read More » -
Proofpoint's 4 New Innovations to Secure the Future of Work
The rise of the agentic workspace, where humans and AI agents collaborate, introduces new security challenges that require a fresh, human-centric security approach. Proofpoint has launched innovations to protect AI interactions, including Prime Threat Protection to block malicious prompt injectio...
Read More » -
AI Adoption Fuels Surge in Critical Security Flaws
A significant surge in hardware, API, and network vulnerabilities is creating unprecedented risks, driven by IoT proliferation and resulting in an 88% increase in hardware flaws and a doubling of network vulnerabilities. The rapid integration of AI into software development is expanding the attac...
Read More » -
Unmasking AI's Hidden Prompt Injection Threat
Modern LLMs have developed sophisticated defenses that neutralize hidden prompt injections, ensuring AI systems process information with integrity and prioritize legitimate user instructions over covert manipulation. Technical countermeasures like stricter system prompts, user input sandboxing, a...
Read More » -
Irregular Raises $80M to Fortify Frontier AI Security
Irregular has raised $80 million in a funding round, valuing the company at $450 million, reflecting strong investor confidence in AI security solutions. The company, formerly Pattern Labs, uses its SOLVE framework and simulated environments to test AI models for vulnerabilities and emergent risk...
Read More »