How to Stop AI Agents From Draining Your Credit Cards

Summary
– The FIDO Alliance announced two working groups, with initial contributions from Google and Mastercard, to develop industry standards for validating and protecting AI agent transactions.
– The standards aim to create anti-phishing authorization mechanisms, cryptographic tools to verify legitimate agent actions, and privacy-preserving frameworks for transaction validation.
– FIDO Alliance CEO Andrew Shikiar emphasized the need to establish foundational security principles for agentic AI to avoid repeating the password security failures of the past.
– Google contributed its Agent Payments Protocol (AP2) for cryptographic verification of user intent, while Mastercard contributed its Verifiable Intent framework for user authorization and control.
– The initiative seeks to build trust in agentic AI by providing baseline protections against agent hijacking and enabling accountability for disputes.
The digital threat landscape is already crowded with malware, identity theft, and account hijacking. Now, agentic AI introduces a new layer of risk: software acting on behalf of humans, which can go wrong in unexpected ways. On Tuesday, the FIDO Alliance, an authentication-focused industry group, announced it is launching two working groups to develop industry standards for securing payments and transactions executed by AI agents. This initiative builds on initial contributions from Google and Mastercard.
The core mission is to create a protective baseline that can be adopted across industries. This would allow users to authorize agent actions using mechanisms resistant to phishing or takeover by bad actors issuing rogue instructions. The standards will incorporate cryptographic tools that digital services can use to verify an agent is accurately and legitimately executing an authenticated person’s commands. They will also include privacy-preserving frameworks enabling users, merchants, and service providers to validate agent-initiated transactions. Ultimately, the goal is to prevent agent hijacking and other malicious behavior while establishing transparency and accountability for dispute resolution.
“Agents are becoming more and more common, they’re moving into mainstream use, but preexisting models aren’t necessarily designed for this sort of paradigm; they weren’t built to contemplate actions performed on a user’s behalf,” said Andrew Shikiar, CEO of the FIDO Alliance, in a statement. He added, “If we look back on our work in recent years on the massive problem space of passwords, that originated decades ago. The security foundation for what became our connected economy wasn’t fit for purpose. Now we’re at a similar precipice with agentic agents and agentic interactions, agentic commerce, where we have an opportunity to not go down that same path and establish some foundational principles that will allow for more trusted interactions.”
Developing industry-wide technical standards is typically a slow, painstaking process that can take years. But given the rapid pace of agentic AI adoption, representatives from the FIDO Alliance, Google, and Mastercard stressed the need for speed. To accelerate progress, both companies are contributing open-source tools. Google’s Agent Payments Protocol (AP2) provides a mechanism for cryptographically verifying that a user genuinely intended a specific agent-initiated transaction. Mastercard’s Verifiable Intent framework, codeveloped with Google to work alongside AP2, offers a secure way for users to authorize and control agent actions.
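As a rough illustration of the kind of mechanism AP2 describes, a user could sign a bounded "intent mandate" up front, which services later check before honoring an agent's request. This sketch uses a shared-secret HMAC purely for simplicity; the actual protocol is built on public-key verifiable credentials, and the field names here are illustrative assumptions, not AP2's schema.

```python
import hashlib
import hmac
import json

def sign_mandate(user_key: bytes, mandate: dict) -> str:
    """Sign a canonical serialization of the user's intent.

    Hypothetical sketch: real AP2 uses public-key verifiable
    credentials, not a shared-secret HMAC.
    """
    payload = json.dumps(mandate, sort_keys=True).encode()
    return hmac.new(user_key, payload, hashlib.sha256).hexdigest()

def verify_mandate(user_key: bytes, mandate: dict, signature: str) -> bool:
    """Check that the mandate an agent presents matches what the user signed."""
    expected = sign_mandate(user_key, mandate)
    return hmac.compare_digest(expected, signature)

# The user authorizes a bounded purchase in advance.
key = b"demo-user-secret"  # stand-in for a real credential
mandate = {"merchant": "example-store", "item": "sneakers", "max_price_usd": 100}
sig = sign_mandate(key, mandate)

legit = verify_mandate(key, mandate, sig)                      # agent acts as authorized
tampered = {**mandate, "max_price_usd": 500}                   # rogue instruction
rogue = verify_mandate(key, tampered, sig)
```

Because the signature covers the whole mandate, any party holding the verification key can detect a hijacked agent that tries to alter the price cap or merchant after the fact.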
“We want to provide cryptographic proof that a transaction was authorized by the user themself, but keep it private so there is built-in selective disclosure,” said Stavan Parikh, Google’s vice president and general manager of payments. “Different players in the ecosystem (platforms, merchants, payment providers, networks) only see the information that’s relevant to them, but the right action gets fulfilled at the right time. Payments is a complex ecosystem problem.”
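One common way to get this kind of selective disclosure is to commit to each transaction field separately with a salted hash, so a given party can be shown only the fields it needs while still verifying them against public commitments. This is a hedged sketch of the general technique (similar in spirit to SD-JWT-style credentials); it is not Google's or Mastercard's actual construction, and the field names are invented for illustration.

```python
import hashlib
import json
import secrets

def commit_fields(fields: dict) -> tuple[dict, dict]:
    """Produce a salted hash commitment per field.

    Returns (commitments, salts). Commitments can be shared broadly;
    a salt is revealed only to the party entitled to see that field.
    """
    salts = {k: secrets.token_hex(16) for k in fields}
    commitments = {
        k: hashlib.sha256(f"{salts[k]}|{json.dumps(v)}".encode()).hexdigest()
        for k, v in fields.items()
    }
    return commitments, salts

def verify_disclosure(commitments: dict, key: str, value, salt: str) -> bool:
    """A verifier checks one revealed field against its public commitment."""
    digest = hashlib.sha256(f"{salt}|{json.dumps(value)}".encode()).hexdigest()
    return digest == commitments[key]

transaction = {"item": "sneakers", "price_usd": 100, "address": "123 Main St"}
commitments, salts = commit_fields(transaction)

# The merchant is shown the item but never the shipping address.
ok = verify_disclosure(commitments, "item", "sneakers", salts["item"])
forged = verify_disclosure(commitments, "price_usd", 99, salts["price_usd"])
```

The salt prevents a party from brute-forcing hidden fields (such as a price from a small range of plausible values) out of the commitments alone.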
Parikh illustrated the concept with a real-world scenario: a person wants to buy a pair of sneakers that are currently sold out. The buyer instructs an AI agent to autonomously purchase the sneakers if they come back in stock at $100 or less. The goal of the new standards is to provide authentication and transparency around that transaction, ensuring the consumer ends up with the right shoes at the intended price.
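On the agent side, that scenario reduces to a guard that executes only when the live listing falls inside the user's standing instruction. A minimal sketch, with illustrative field names:

```python
def within_mandate(mandate: dict, listing: dict) -> bool:
    """Return True only if the live listing satisfies the user's standing instruction."""
    return (
        listing["item"] == mandate["item"]
        and listing["in_stock"]
        and listing["price_usd"] <= mandate["max_price_usd"]
    )

mandate = {"item": "sneakers", "max_price_usd": 100}

# Restock at the intended price: the agent may proceed.
allowed = within_mandate(mandate, {"item": "sneakers", "in_stock": True, "price_usd": 95})
# Restock above the cap: the purchase is blocked even though the item matches.
blocked = within_mandate(mandate, {"item": "sneakers", "in_stock": True, "price_usd": 120})
```

The point of the standards effort is that this check would be enforced and auditable across the payment chain, not merely promised by the agent itself.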
Establishing these baseline protections is essential for building trust in agentic AI and encouraging adoption of AI-powered tools, Parikh noted. But even for users who prefer to avoid AI, the reality of its proliferation means minimum guardrails are necessary either way.
(Source: Wired)



