
Trump Officials Urge Banks to Test Anthropic’s AI Model

Originally published on: April 13, 2026
Summary

– U.S. Treasury and Fed officials encouraged bank executives to use Anthropic’s new Mythos AI model to detect system vulnerabilities.
– While JPMorgan Chase is an initial partner, several other major banks are also reportedly testing the Mythos model.
– Anthropic is limiting access to Mythos, citing its strong, albeit unspecialized, ability to find security flaws.
– This occurs as Anthropic is in a legal dispute with the U.S. government over being labeled a supply-chain risk.
– U.K. financial regulators are separately discussing the potential risks associated with the Mythos model.

In a notable development this week, senior U.S. financial regulators have formally advised major banks to evaluate a powerful new artificial intelligence tool. Treasury Secretary Scott Bessent and Federal Reserve Chair Jerome Powell convened a meeting with leading bank executives, urging them to test Anthropic’s Mythos model for identifying system vulnerabilities. This guidance, reported by Bloomberg, highlights the government’s proactive stance on leveraging advanced AI for financial sector cybersecurity, even as the administration is engaged in a separate legal dispute with the company.

The recommendation comes alongside news that several Wall Street giants are already examining the model’s capabilities. While JPMorgan Chase was named as an initial launch partner, other institutions including Goldman Sachs, Citigroup, Bank of America, and Morgan Stanley are reportedly conducting their own assessments. This broad industry interest underscores the perceived value of the AI system for risk management and security protocols.

Anthropic itself has adopted a cautious rollout strategy for Mythos, citing its unexpected proficiency in uncovering security flaws. The company announced it would limit initial access, noting the model was not specifically designed for cybersecurity yet performs the task exceptionally well. Some observers view this limited release as a strategic business move, possibly designed to generate market demand and carefully control early implementation.

The situation presents a complex paradox. On one hand, federal regulators are actively promoting the adoption of this AI model within critical financial infrastructure. On the other, the Trump administration is currently litigating against Anthropic over a Defense Department ruling. The DOD labeled the company a supply-chain risk after discussions broke down regarding restrictions on how government agencies could utilize its AI technology. This juxtaposition reveals the nuanced and sometimes contradictory pressures facing AI firms operating in regulated, high-stakes industries.

Concerns over the model are not confined to the United States. According to the Financial Times, U.K. financial regulators are also engaged in discussions about the potential risks associated with the Mythos system. Their scrutiny indicates that the global financial community is grappling with the dual imperative of harnessing innovative AI tools while thoroughly understanding their implications for systemic stability and security.

(Source: TechCrunch)
