Microsoft Copilot Hijacked in Reprompt Attack for Data Theft

▼ Summary
– Researchers discovered an attack method called “Reprompt” that could let hackers infiltrate a user’s Microsoft Copilot session to steal sensitive data with a single malicious link.
– The attack works by hiding malicious instructions in a URL parameter, which Copilot automatically executes, and then bypassing safeguards to maintain ongoing, hidden data exfiltration.
– It leverages three key techniques: injecting prompts via a URL parameter, a double-request method to bypass initial protections, and a chain-request technique for continuous data theft.
– The vulnerability was responsibly disclosed to Microsoft and has been fixed in a security update, though no real-world exploitation has been detected.
– The flaw only affected Copilot Personal, not the enterprise-focused Microsoft 365 Copilot, which has additional security controls.
A newly identified cybersecurity threat, known as the “Reprompt” attack, demonstrates a method for malicious actors to hijack a user’s Microsoft Copilot session through a single malicious link, leading to the potential theft of sensitive personal data. This technique, uncovered by security researchers, exploits specific vulnerabilities in how Copilot processes web requests, allowing attackers to issue commands and exfiltrate information without the user’s knowledge after just one click.
The attack leverages Copilot’s integration into the Windows operating system and the Edge browser, where it functions as an AI assistant with access to user prompts, conversation history, and certain personal Microsoft account data. The core vulnerability stems from Copilot’s acceptance of prompts via a URL parameter, which executes automatically when a page loads. By embedding malicious instructions within a seemingly legitimate Copilot link and delivering it to a target, often through phishing, an attacker can initiate the compromise.
Security analysts from Varonis detailed the attack flow, which combines three primary techniques to bypass Copilot’s built-in safeguards. First, the Parameter-to-Prompt (P2P) injection uses the ‘q’ parameter in a URL to directly inject commands into the Copilot session. Second, a double-request technique exploits the fact that Copilot’s data-leak protections primarily apply to an initial request. By instructing the AI to perform an action twice and compare results, subsequent requests can circumvent these guardrails. Finally, a chain-request technique enables continuous data theft by having Copilot dynamically receive follow-up instructions from an attacker-controlled server, creating a stealthy back-and-forth exchange.
In a practical demonstration, researchers showed how a secret phrase could be extracted. An initial request, blocked by protections, was followed by a second, successful request after using the double-check prompt. Because the actual exfiltration commands are delivered from the attacker’s server after the initial link click, client-side security tools cannot easily detect what specific data is being stolen, as the malicious intent is hidden in subsequent server communications.
Microsoft was notified of this vulnerability responsibly in August of last year. The company has since issued a security patch to address the issue as part of its January 2026 Patch Tuesday updates. While there are no known instances of this attack being used in active campaigns, applying the latest Windows security updates is strongly advised. It is important to note that this Reprompt method specifically impacted Copilot Personal and not the enterprise-focused Microsoft 365 Copilot, which benefits from additional security layers like Purview auditing and data loss prevention controls.
(Source: Bleeping Computer)





