Topic: prompt injection attacks

  • Top Cyber Threats to Agentic AI Systems at #BHUSA

    Top Cyber Threats to Agentic AI Systems at #BHUSA

    Prompt injection attacks exploit AI systems by manipulating inputs, tricking agents into unauthorized actions or data leaks due to their natural language processing nature. Training data poisoning corrupts AI models by skewing datasets, leading to unreliable outputs, especially critical i...

    Read More »
  • Gemini CLI flaw lets hackers execute malicious commands

    Gemini CLI flaw lets hackers execute malicious commands

    Google's Gemini CLI tool, an AI-powered coding assistant, was found vulnerable to malicious command execution shortly after its release, allowing attackers to bypass safeguards and extract sensitive data. Researchers demonstrated a two-step exploit using seemingly harmless documentation files (li...

    Read More »
  • GitLab AI Assistant Tricked Into Making Safe Code Malicious

    GitLab AI Assistant Tricked Into Making Safe Code Malicious

    AI-powered coding assistants like GitLab's Duo can be manipulated through prompt injection, leading to harmful code insertion or sensitive data leaks. Researchers found vulnerabilities in how AI tools process external inputs, allowing hidden instructions in development artifacts to trigger unauth...

    Read More »
  • Google AI Summaries: How to Spot Scammers Trying to Steal From You

    Google AI Summaries: How to Spot Scammers Trying to Steal From You

    Scammers are exploiting Google's AI summaries to display fake customer service numbers, tricking users into sharing personal and financial information. These AI overviews present a single, seemingly authoritative answer, making fraudulent details harder to detect compared to traditional search re...

    Read More »
  • AI-Powered Cursor IDE at Risk of Prompt Injection Attacks

    AI-Powered Cursor IDE at Risk of Prompt Injection Attacks

    A critical security flaw (CVE-2025-54135) in Cursor IDE, dubbed CurXecute, allows remote code execution via manipulated AI prompts, risking unauthorized system access. Attackers can exploit the Model Context Protocol (MCP) by injecting malicious prompts through third-party servers (e.g., Slack), ...

    Read More »
Close

Adblock Detected

We noticed you're using an ad blocker. To continue enjoying our content and support our work, please consider disabling your ad blocker for this site. Ads help keep our content free and accessible. Thank you for your understanding!