Topic: prompt injection attacks
-
Top Cyber Threats to Agentic AI Systems at #BHUSA
Prompt injection attacks exploit AI systems by manipulating inputs, tricking agents into unauthorized actions or data leaks due to their natural language processing nature. Training data poisoning corrupts AI models by skewing datasets, leading to unreliable outputs, especially critical i...
Read More » -
Gemini CLI flaw lets hackers execute malicious commands
Google's Gemini CLI tool, an AI-powered coding assistant, was found vulnerable to malicious command execution shortly after its release, allowing attackers to bypass safeguards and extract sensitive data. Researchers demonstrated a two-step exploit using seemingly harmless documentation files (li...
Read More » -
AI Chatbots Vulnerable to Memory Attacks That Steal Cryptocurrency
AI-powered cryptocurrency bots are vulnerable to memory manipulation attacks, allowing hackers to hijack transactions and redirect funds using simple text prompts. The ElizaOS platform enables AI agents to manage blockchain transactions, but manipulated prompts can corrupt their memory, leading t...
Read More » -
GitLab AI Assistant Tricked Into Making Safe Code Malicious
AI-powered coding assistants like GitLab's Duo can be manipulated through prompt injection, leading to harmful code insertion or sensitive data leaks. Researchers found vulnerabilities in how AI tools process external inputs, allowing hidden instructions in development artifacts to trigger unauth...
Read More » -
Google AI Summaries: How to Spot Scammers Trying to Steal From You
Scammers are exploiting Google's AI summaries to display fake customer service numbers, tricking users into sharing personal and financial information. These AI overviews present a single, seemingly authoritative answer, making fraudulent details harder to detect compared to traditional search re...
Read More » -
OpenAI warns of 'high' AI weaponization risk, unveils countermeasures
OpenAI warns of a high risk of AI weaponization for cyberattacks and is implementing countermeasures to safeguard its technology while aiding defenders. AI's rapid advancement, demonstrated by a sharp increase in performance on cybersecurity challenges, suggests it could soon develop sophisticate...
Read More » -
AI-Powered Cursor IDE at Risk of Prompt Injection Attacks
A critical security flaw (CVE-2025-54135) in Cursor IDE, dubbed CurXecute, allows remote code execution via manipulated AI prompts, risking unauthorized system access. Attackers can exploit the Model Context Protocol (MCP) by injecting malicious prompts through third-party servers (e.g., Slack), ...
Read More » -
Google Unveils Chrome's AI Security Safeguards
The rise of intelligent web agents introduces significant security risks, including data loss and financial threats, prompting Google to develop a multi-layered security framework for Chrome to keep these features safe and user-controlled. Google's security approach includes a User Alignment Crit...
Read More »