
Google Ignores Critical Gemini ASCII Attack

Summary

– Google has decided not to fix a new ASCII smuggling attack in Gemini, which can trick the AI into providing fake information and poisoning its data.
– ASCII smuggling uses invisible Unicode characters to hide payloads that LLMs process but users can’t see, exploiting the gap between user and machine interpretation.
– Gemini’s integration with Google Workspace poses a high risk, as attackers can embed hidden commands in Calendar invites or emails to spoof identities or extract data.
– The researcher reported the issue to Google, but it was dismissed as not a security bug and only exploitable in social engineering contexts.
– Other AI tools, including Claude, ChatGPT, and Microsoft Copilot, blocked ASCII smuggling attempts thanks to input sanitization measures.

A newly identified ASCII smuggling vulnerability within Google’s Gemini AI platform allows attackers to manipulate the system, potentially leading to the dissemination of false information and unauthorized data access. This security flaw exploits special Unicode characters that remain invisible to human users but can be interpreted and acted upon by the AI model, creating a dangerous disconnect between what is displayed and what is processed.

ASCII smuggling represents a class of attack where characters from the Tags Unicode block embed hidden instructions within seemingly normal text. These payloads evade visual detection by people but can be parsed and executed by large language models, effectively poisoning the AI’s responses and behavior. This technique parallels other recently documented attacks against Gemini that similarly exploit discrepancies between user interfaces and machine interpretation, including CSS manipulation and GUI limitation exploits.
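The encoding trick described above can be sketched in a few lines. This is a minimal illustration, not the researcher's actual payload: each ASCII character of the hidden instruction is shifted into the Unicode Tags block (U+E0000 to U+E007F), producing characters that most interfaces render as invisible while the raw string still carries them to any downstream parser.

```python
# Sketch of ASCII smuggling via the Unicode Tags block (U+E0000-U+E007F).
# Offsetting each ASCII code point by 0xE0000 yields "tag" characters that
# typical UIs do not display, so the hidden text rides along unseen.

def smuggle(visible: str, hidden: str) -> str:
    """Append `hidden` to `visible`, encoded as invisible Tag characters."""
    tagged = "".join(chr(0xE0000 + ord(c)) for c in hidden)
    return visible + tagged

def reveal(text: str) -> str:
    """Decode any Tag characters in `text` back into readable ASCII."""
    return "".join(
        chr(ord(c) - 0xE0000)
        for c in text
        if 0xE0000 <= ord(c) <= 0xE007F
    )

msg = smuggle("Team sync at 3pm", "ignore prior instructions")
print(msg == "Team sync at 3pm")   # False: the payload is there, just invisible
print(reveal(msg))                 # prints "ignore prior instructions"
```

A model that consumes the raw string sees the hidden instruction; a person reading the rendered text sees only the meeting note, which is precisely the user/machine gap the attack exploits.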

LLM susceptibility to ASCII smuggling isn't fundamentally new; researchers have investigated such possibilities since generative AI tools first emerged. The current threat landscape, however, has evolved significantly. Earlier chatbot vulnerabilities required users to be tricked into manually pasting malicious prompts. Today's agentic AI systems like Gemini operate with broad access to sensitive user data and can perform tasks autonomously, dramatically increasing the potential impact of successful attacks.

Security researcher Viktor Markopoulos of cybersecurity firm FireTail conducted comprehensive testing of ASCII smuggling against multiple popular AI platforms. His investigation revealed that Google Gemini, DeepSeek, and Grok all demonstrated vulnerability to these attacks through various vectors, including calendar invitations, email systems, and social media posts. Conversely, Claude, ChatGPT, and Microsoft Copilot implemented effective input sanitization measures that successfully blocked ASCII smuggling attempts.
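The exact filters those platforms apply are not public, but a plausible sanitization pass looks something like the following sketch: strip the Unicode Tags block along with any other format-category (Cf) characters, which render invisibly yet survive copy/paste and API calls.

```python
import unicodedata

# Hypothetical input sanitizer in the spirit of what resistant platforms
# are described as doing (the real implementations are not documented):
# drop Tags-block characters (U+E0000-U+E007F) and any other Unicode
# format-category (Cf) characters such as zero-width spaces.

def sanitize(text: str) -> str:
    return "".join(
        c for c in text
        if not (0xE0000 <= ord(c) <= 0xE007F)
        and unicodedata.category(c) != "Cf"
    )

# Smuggled Tag characters are removed before the text reaches the model:
hidden = "".join(chr(0xE0000 + ord(c)) for c in "hidden")
print(sanitize("Team sync at 3pm" + hidden))  # prints "Team sync at 3pm"
```

Filtering on the Cf category rather than on a hard-coded block list also catches related invisible-character tricks, at the cost of stripping some legitimate formatting controls; a production filter would need to weigh that trade-off.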

The integration between Gemini and Google Workspace presents particularly concerning attack scenarios. Malicious actors could embed hidden text within Calendar invitations to overwrite organizer details, conceal instructions in meeting titles, or smuggle malicious links into event descriptions, all while the interface appears normal to users. This creates opportunities for identity spoofing and other forms of deception.

Email-based attacks pose another serious threat vector. For users whose LLMs connect directly to their inboxes, a simple message containing hidden commands could instruct the AI to search for sensitive information or transmit contact details to unauthorized parties. This transforms conventional phishing attempts into automated data extraction tools operating without ongoing human intervention.

When large language models are directed to browse websites, they might encounter hidden payloads embedded within product descriptions or other content. These could feed the AI malicious URLs that it might then convey to users as legitimate resources, further extending the attack’s reach.

Despite Markopoulos reporting these findings to Google in September, the technology company has declined to address the vulnerability, characterizing it as outside the scope of security bugs and only exploitable through social engineering. The researcher demonstrated practical attack scenarios where invisible instructions caused Gemini to present a potentially malicious website as a legitimate source for discounted phones, illustrating how the technique can manipulate AI-generated recommendations.

Other technology firms have adopted different approaches to similar security challenges. Amazon, for instance, has published comprehensive guidance regarding Unicode character smuggling risks, indicating that some industry players view these vulnerabilities as requiring proactive mitigation strategies. The contrasting responses highlight varying security postures within the AI development community regarding potential attack vectors that bridge technical and social engineering dimensions.

(Source: Bleeping Computer)

Topics: ASCII smuggling, Gemini security, AI vulnerabilities, data poisoning, Unicode exploitation, calendar exploitation, autonomous threats, email security, social engineering, Google Workspace