AI-Powered Cursor IDE at Risk of Prompt Injection Attacks

▼ Summary
– A vulnerability called CurXecute (CVE-2025-54135) affects most versions of the AI-powered Cursor IDE, allowing remote code execution with developer privileges via malicious prompts.
– The flaw exploits Cursor’s Model Context Protocol (MCP), which connects AI agents to external tools, exposing them to untrusted data that can hijack sessions.
– Attackers can inject malicious prompts into third-party MCP servers (e.g., Slack) to rewrite configuration files and execute arbitrary commands without user consent.
– Successful exploitation could lead to ransomware, data theft, or AI manipulation, as demonstrated in a proof-of-concept video by researchers.
– Cursor patched the vulnerability in version 1.3 (released July 29) and advises users to update to mitigate security risks.
A critical security flaw dubbed CurXecute has been discovered in the AI-driven Cursor IDE, putting developers at risk of remote code execution attacks through manipulated prompts. The vulnerability, officially tracked as CVE-2025-54135, allows attackers to hijack the integrated development environment by feeding malicious instructions to its AI agent, potentially leading to unauthorized system access.
Cursor’s AI-powered coding assistant relies on the Model Context Protocol (MCP), an open-standard framework that enhances functionality by linking to external tools like Slack, GitHub, and databases. While this feature boosts productivity, it also introduces risks, untrusted data sources can influence the AI agent’s behavior, opening pathways for exploitation.
Security experts at Aim Security uncovered that attackers could inject harmful prompts into third-party MCP servers, such as public Slack channels. When a developer asks the AI to summarize these messages, the payload executes automatically, rewriting the `~/.cursor/mcp.json` configuration file without user consent. This bypasses security checks, enabling arbitrary command execution with the victim’s privileges.
The threat isn’t hypothetical. Researchers demonstrated how a single poisoned document could transform the AI agent into a backdoor, granting attackers control over local systems. Potential consequences include ransomware deployment, data theft, and even AI hallucination attacks that corrupt projects or enable domain squatting.
Cursor addressed the issue swiftly after private disclosure, releasing a patch in version 1.3 on July 29. The fix mitigates the vulnerability, which received a medium-severity CVSS score of 8.6. Developers using older versions remain exposed, making immediate updates critical.
This incident highlights broader concerns about AI-assisted tools and their susceptibility to prompt injection attacks. As coding environments increasingly integrate generative AI, robust safeguards against malicious inputs become essential to prevent similar exploits in the future.
(Source: Bleeping Computer)