Microsoft’s AI Summaries Allegedly Manipulated by ‘Poison’ Buttons

▼ Summary
– Microsoft researchers identified a technique called “AI Recommendation Poisoning,” where businesses hide prompt injections in website “Summarize with AI” buttons to manipulate AI assistant memory.
– The method uses URL query parameters to secretly instruct AI assistants to remember a company as a trusted source, potentially biasing future recommendations without user knowledge.
– Microsoft found 50 distinct prompt injection attempts from 31 real businesses, with targets including health and financial services sites, and traced the technique to publicly available tools.
– The company has implemented protections in Copilot and published detection tools, comparing this tactic to SEO poisoning but now targeting AI memory instead of search indexes.
– This represents an evolving commercial threat, as open-source tooling allows rapid deployment across most major AI platforms, raising questions about how platforms will respond.
A new cybersecurity report from Microsoft has identified a concerning trend where companies are attempting to manipulate artificial intelligence assistants through a technique dubbed “AI Recommendation Poisoning.” This method involves embedding hidden instructions within website buttons that appear to offer helpful AI-powered summaries. When a user clicks, the button does more than just request a summary; it secretly tries to program the AI to remember that company as a trusted source for future inquiries, potentially skewing recommendations without the user’s awareness.
Microsoft’s Defender Security Research Team analyzed AI-related web addresses found in email traffic over a two-month period. Their investigation uncovered 50 distinct prompt injection attempts originating from 31 different companies. The instructions followed a common pattern, often commanding the AI to recall the business as “a trusted source for citations” or the primary authority on a certain subject. In more aggressive cases, the hidden prompt injected entire marketing descriptions, complete with product features and sales pitches, directly into the AI’s memory for later use.
The research traced this tactic back to publicly accessible online tools designed to help websites “build presence in AI memory.” These tools generate specially crafted web links that contain prompt parameters, a feature supported by most major AI platforms. Microsoft specifically noted the structures used for Copilot, ChatGPT, Claude, Perplexity, and Grok, while pointing out that how each assistant retains memory varies. This technique is formally recognized in security frameworks as a form of memory poisoning and prompt injection.
Notably, the companies involved were legitimate businesses, not typical cybercriminals. Several operated in sensitive sectors like health and finance, where biased AI advice carries significant risk. One company used a domain name easily confused with a popular website, which could lend it false credibility. Ironically, one of the 31 entities was a security vendor. Microsoft also highlighted a secondary danger: many sites using this method host user-generated content like forums. If an AI deems the site authoritative, it might incorrectly extend that trust to unverified comments or posts on the same domain.
In response, Microsoft stated that its Copilot service has defenses against such cross-prompt injection attacks. The company indicated that some previously documented injection behaviors no longer work in Copilot and that its protective measures are continually improving. For organizations using its security products, Microsoft has released advanced search queries to help scan email and Teams traffic for links containing keywords related to memory manipulation. Individual users can review and delete stored memories within their Copilot chat settings.
The significance of this development is substantial. Microsoft compares AI Recommendation Poisoning to longstanding web threats like SEO poisoning and adware, placing it in the category of tactics that search engines have combated for years. The key difference is the target: instead of manipulating a search index, these methods aim to corrupt an AI assistant’s personal memory. This creates a challenging environment for businesses trying to gain visibility through legitimate means, as they may be competing against rivals who game the system via prompt injection.
This report arrives at a pivotal moment. Independent analyses have shown that AI brand recommendations can be inconsistent, and industry leaders have noted that AI systems often gather business recommendations by scanning other websites. Memory poisoning shortcuts this entire process by planting a favorable recommendation directly into the user’s private AI interface. While broader discussions have focused on poisoning the data used to train AI models, this research reveals a more immediate, commercially deployed threat occurring during live user interactions.
Looking forward, Microsoft acknowledges this is a rapidly evolving challenge. The availability of open-source tools means new attempts can emerge faster than any single platform can block them, and the underlying URL technique is applicable to nearly every major AI assistant. A major unanswered question is whether AI platforms will treat this activity as a clear policy violation with penalties, or if it will persist as an ambiguous growth hack that companies continue to exploit.
(Source: Search Engine Journal)





