Anthropic’s Official Git MCP Server Exposes Prompt Injection Bugs

▼ Summary
– Three vulnerabilities were found in Anthropic’s official Git server for the Model Context Protocol (MCP), named `mcp-server-git`.
– The flaws are exploitable via prompt injection, allowing attackers to manipulate AI assistants by influencing what they read, such as through a malicious file.
– These vulnerabilities enable actions like arbitrary code execution, file deletion, and loading arbitrary files into an AI model’s context when the Git server is used with a filesystem server.
– The issues are notable because they affect the default, “out-of-the-box” configuration of all versions released before December 8, 2025, increasing real-world risk.
– The root cause is improper validation of repository paths and arguments in Git commands, and fixes have been released, with users advised to update immediately.

Cybersecurity experts have identified three significant security flaws within the official Git server designed for Anthropic’s Model Context Protocol (MCP). These vulnerabilities, present in all versions released before December 8, 2025, can be exploited through prompt injection attacks. This method allows malicious actors to manipulate AI assistants into performing unauthorized actions without requiring any direct access to the target system or its credentials. The discovery is particularly concerning because it impacts Anthropic’s own reference implementation of the MCP standard, a foundational component for secure AI tool integration.
The research, conducted by the firm Cyata, reveals that an attacker only needs to influence the information an AI assistant processes. This could be achieved through a malicious README file in a Git repository, a poisoned issue description, or even a compromised webpage that the assistant is instructed to read. Once triggered, these flaws enable several dangerous outcomes. Attackers could potentially execute arbitrary code when the Git server is used alongside a filesystem MCP server, delete files anywhere on the host system, or load arbitrary files into the language model’s context for processing. While the vulnerabilities do not directly steal data, they can expose sensitive files to the AI, creating serious privacy and security risks downstream.
What makes these findings stand out is their “out-of-the-box” nature. Previous security issues related to MCP often depended on unusual or unsafe configurations. In this instance, the vulnerabilities were effective in default installations, significantly raising the potential for real-world exploitation. The core of the problem lies in how the `mcp-server-git` implementation handles certain commands. The server fails to properly validate repository paths or sanitize arguments passed to underlying Git commands. This lack of validation allows an attacker to direct the server to operate on any directory on the system, far beyond the intended repository.
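The missing control can be sketched as a path allow-list check. The following is an illustrative example, not mcp-server-git’s actual code: `ALLOWED_ROOT` and `validate_repo_path` are hypothetical names, and the idea is simply to resolve the requested repository path and refuse anything that escapes the permitted root.

```python
from pathlib import Path

# Hypothetical allow-list root under which repositories may live.
ALLOWED_ROOT = Path("/home/user/repos").resolve()

def validate_repo_path(requested: str) -> Path:
    """Resolve the requested path and reject anything outside ALLOWED_ROOT."""
    resolved = (ALLOWED_ROOT / requested).resolve()
    # Path.is_relative_to (Python 3.9+) catches ../ traversal after resolution,
    # which is the kind of check the vulnerable server lacked.
    if not resolved.is_relative_to(ALLOWED_ROOT):
        raise ValueError(f"repository path escapes allowed root: {requested}")
    return resolved
```

Resolving *before* comparing is the important step: a naive string-prefix check on the unresolved input would still let `../`-laden paths through.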
For example, unsanitized arguments passed to the `git_diff` tool could be manipulated to overwrite critical files. Similarly, misuse of the `git_init` tool can lead to arbitrary file deletion or set the stage for code execution when combined with other capabilities, such as file writing from a separate MCP server. The open design of MCP, which allows AI models to interact with tools like filesystems and databases, inherently expands the attack surface if the bridging servers are not meticulously secured.
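A common defensive pattern for this class of bug is to reject option-like arguments outright and terminate option parsing with `--` before handing user-influenced values to Git. The sketch below is an assumption about one reasonable mitigation, not the actual patch; `safe_git_diff` is a hypothetical helper name.

```python
import subprocess

def safe_git_diff(repo_path: str, target: str) -> str:
    """Run `git diff` with the target treated strictly as a pathspec,
    never as an option flag."""
    # Reject anything Git would parse as an option (e.g. --output=<file>),
    # which is how an unsanitized argument can be abused to write files.
    if target.startswith("-"):
        raise ValueError(f"option-like argument rejected: {target}")
    # The `--` separator tells Git that everything after it is a path,
    # providing a second layer of defense against option smuggling.
    result = subprocess.run(
        ["git", "-C", repo_path, "diff", "--", target],
        capture_output=True, text=True, check=True,
    )
    return result.stdout
```

Passing the command as a list (rather than a shell string) also avoids shell interpolation entirely, so the only remaining injection surface is Git’s own argument parsing.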
The specific vulnerabilities have been cataloged under the identifiers CVE-2025-68143, CVE-2025-68144, and CVE-2025-68145. Anthropic acknowledged the reports in September and issued necessary fixes in December 2025. Cyata strongly advises all users of the affected software to update their installations immediately. Furthermore, organizations should review how different MCP servers are combined in their environments, paying special attention to scenarios where both Git and filesystem access are enabled simultaneously, as this combination can amplify the impact of these flaws.
(Source: InfoSecurity Magazine)
