AI & TechArtificial IntelligenceBigTech CompaniesCybersecurityNewswireWhat's Buzzing

Gemini CLI flaw lets hackers execute malicious commands

▼ Summary

– Researchers exploited Google’s Gemini CLI within 48 hours, making it secretly send sensitive data to an attacker’s server.
– Gemini CLI is a free, open-source AI tool that assists developers with coding directly in the terminal using Google’s Gemini 2.5 Pro model.
– The attack bypassed security controls by tricking users into describing a malicious code package and adding a harmless command to an allow list.
– The malicious package appeared normal, with the only harmful content hidden in a README.md file, a common target for prompt-injection attacks.
– Prompt-injection attacks, like this one, are a major threat to AI chatbots, as they exploit natural-language instructions in overlooked files.

Google’s Gemini CLI tool, designed to assist developers with AI-powered coding, was found vulnerable to malicious command execution within days of its release. Security researchers uncovered a flaw allowing attackers to bypass built-in safeguards and secretly extract sensitive data through carefully crafted prompts.

The command-line interface tool connects to Google’s Gemini 2.5 Pro model, offering coding assistance directly in terminal windows rather than traditional text editors. While positioned as a productivity booster for developers, its security protections proved insufficient against sophisticated prompt injection attacks.

Just two days after Gemini CLI’s June 25 launch, Tracebit researchers demonstrated how attackers could exploit the system. The method required two simple steps: having the tool analyze a seemingly harmless code package and adding an innocent-looking command to an approved list. What made the attack particularly concerning was its use of ordinary documentation files to hide malicious instructions.

The researchers planted their exploit within a standard README.md file – the type of documentation developers routinely include with code packages. While human programmers might skim these files, the AI system processes them thoroughly, making it susceptible to hidden prompts buried in natural language descriptions. The attack leveraged this behavioral difference between human and machine reading patterns.

Prompt injection attacks represent one of the most significant threats facing AI-assisted development tools. This incident highlights how even carefully designed systems can be compromised through unexpected vectors. The researchers’ approach didn’t require obviously malicious code – the payload resided entirely in documentation that would appear legitimate during casual inspection.

Security experts note this vulnerability underscores the challenges of securing AI tools that interact directly with system commands. Unlike traditional development environments with clear separation between documentation and executable code, AI-powered assistants interpret natural language instructions that can blur these boundaries. The Gemini CLI case demonstrates how attackers can weaponize this capability difference to bypass security measures.

(Source: Ars Technica)

Topics

google gemini cli exploit 95% prompt injection attacks 90% ai tool security 85% malicious code documentation 80% developer tool vulnerabilities 75%