Topic: llm security

  • LLMs Infiltrate Your Stack: New Risks at Every Layer

    LLMs Infiltrate Your Stack: New Risks at Every Layer

    The integration of LLMs into enterprises requires a fundamental security shift, moving from treating models as intelligent brains to viewing them as untrusted compute, which is critical for establishing robust trust boundaries. Key technical vulnerabilities include prompt injection, sensitive dat...

    Read More »
  • UK NCSC Warns of Rising Prompt Injection Attack Threats

    UK NCSC Warns of Rising Prompt Injection Attack Threats

    The UK's National Cyber Security Centre warns that prompt injection attacks on large language models (LLMs) may be fundamentally unsolvable, as LLMs inherently do not distinguish between data and instructions. Instead of seeking a perfect fix, organizations must focus on risk reduction by impleme...

    Read More »
  • 1Password's Fix for AI Browser Agent Security Flaws

    1Password's Fix for AI Browser Agent Security Flaws

    1Password introduced Secure Agentic Autofill to protect user credentials during AI-driven web tasks by requiring explicit user approval before sharing login details. The feature ensures AI agents never directly access or store passwords by using a secure, encrypted connection and human verificati...

    Read More »
  • Zscaler Buys SPLX to Secure AI Investments

    Zscaler Buys SPLX to Secure AI Investments

    Zscaler has acquired SPLX to enhance its Zero Trust Exchange platform with advanced AI security capabilities, including asset discovery, automated red teaming, and governance tools. The integration addresses the urgent need to secure the entire AI lifecycle, protecting sensitive data like prompts...

    Read More »