AI Agent Credentials Leaked: 29 Million Secrets Exposed in 2025

▼ Summary
– AI-assisted code commits leaked secrets at roughly double the baseline rate in 2025, indicating a governance failure at AI development speeds.
– The use of multiple AI service providers has become standard, multiplying credential types and surfaces, with AI-service secret exposures growing 81% year-over-year.
– Every layer of the full AI application stack, from databases to orchestration frameworks, shows a governance gap with credential leak rates surging, often by hundreds of percent.
– New integration standards like Model Context Protocol can codify insecure patterns, as developers copy example configurations that default to hardcoded static secrets.
– Effective AI authentication requires treating agents as governed non-human identities with short-lived, scoped credentials and event-driven lifecycle management, not static keys.

The explosive growth of AI agent development is creating a parallel surge in security vulnerabilities, with nearly 29 million new secrets exposed in public code repositories in 2025. This staggering 34% annual increase highlights a fundamental mismatch: authentication governance is failing to keep pace with the speed of AI-assisted software creation. Every integration point for an agent, from LLM platforms to cloud databases, requires an identity, and the current approach to managing these credentials is fundamentally broken.
A critical finding from recent research reveals that commits involving AI coding assistants leaked secrets at roughly double the baseline rate. This isn’t about a specific tool being careless. It reflects a systemic issue where development velocity outstrips security protocols. Developers can now scaffold projects and test integrations in minutes, committing functional prototypes long before considering where credentials should reside, who owns them, or how they will be rotated. AI-generated code often appears production-ready prematurely, and the resulting gap is frequently filled with hardcoded API keys and other static secrets.
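One upstream countermeasure is scanning code for hardcoded secrets before it ever reaches a repository. The sketch below shows the basic pattern-matching idea behind such scanners; the regexes and rule names are illustrative simplifications (real tools like gitleaks or ggshield ship far larger, provider-specific rule sets), not a production detector.

```python
import re

# Illustrative patterns only -- real secret scanners use hundreds of
# provider-specific rules plus entropy checks to cut false positives.
SECRET_PATTERNS = {
    "openai_style_key": re.compile(r"sk-[A-Za-z0-9]{20,}"),
    "aws_access_key_id": re.compile(r"AKIA[0-9A-Z]{16}"),
    "generic_assignment": re.compile(
        r"(?i)(api[_-]?key|secret|token)\s*[:=]\s*['\"][A-Za-z0-9_\-]{16,}['\"]"
    ),
}

def find_secrets(text: str) -> list[tuple[str, str]]:
    """Return (rule_name, matched_text) pairs for likely hardcoded secrets."""
    hits = []
    for name, pattern in SECRET_PATTERNS.items():
        for match in pattern.finditer(text):
            hits.append((name, match.group(0)))
    return hits
```

Wired into a pre-commit hook or CI gate, a check like this rejects the commit before the secret ever becomes public, which is exactly the "natural slowdown" that AI-paced development otherwise skips.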
The shift toward multi-provider integration compounds the problem. Teams routinely connect to several LLM platforms and AI services for resilience and capability matching, a practice that multiplies the credential surfaces in a single project. One gateway service that provides access to multiple models saw its associated credential leaks grow more than 48-fold year-over-year. This pattern means more API keys are created per project, offering more opportunities for insecure secrets to proliferate across codebases. The authentication model has simply not evolved to handle this new, multi-service reality.
This credential sprawl extends across the entire AI application stack. Each layer, from orchestration and monitoring to data services and retrieval, requires authentication and exhibits the same governance gap. Database platforms designed for AI applications, particularly those enabling vector search, saw associated leak rates jump by nearly 1,000%. Orchestration frameworks and agent-building platforms showed similar explosive growth, between 500% and 600%. These platforms are especially risky as their access tokens often carry broad, account-level permissions. Every leaked key represents an integration where development speed was prioritized over implementing properly scoped, secure identities.
Alarmingly, new technical standards are codifying these insecure patterns. Analysis of the emerging Model Context Protocol (MCP), designed to connect LLMs to tools and data, found tens of thousands of unique secrets exposed in its configuration files. The issue is not that the standard itself is flawed, but that its example implementations often demonstrate authentication via hardcoded credentials. When developers copy and adapt these examples, the insecure pattern becomes the de facto implementation, spreading rapidly across the ecosystem.
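To make the pattern concrete: a typical copied MCP client configuration looks like the first fragment below, with the secret hardcoded in the file. The server name and token here are hypothetical, though the field layout matches the common MCP client config shape.

```json
{
  "mcpServers": {
    "example-server": {
      "command": "npx",
      "args": ["-y", "@example/mcp-server"],
      "env": {
        "EXAMPLE_API_KEY": "sk-live-hardcoded-value-here"
      }
    }
  }
}
```

The safer pattern keeps the credential out of the file entirely, injecting it from the launching environment or a secrets manager at runtime. Some hosts support variable expansion in config files, in which case the entry references only the variable name:

```json
{
  "mcpServers": {
    "example-server": {
      "command": "npx",
      "args": ["-y", "@example/mcp-server"],
      "env": {
        "EXAMPLE_API_KEY": "${EXAMPLE_API_KEY}"
      }
    }
  }
}
```

Whether `${VAR}` expansion works is host-dependent; when it is unsupported, the variable must be set in the environment that launches the MCP host process instead.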
Addressing this crisis requires moving beyond detection and fixing the problem upstream. AI agents must be treated as governed non-human identities, each with a unique identity, scoped permissions, and a clear owner. The reliance on static secrets must be replaced with modern mechanisms like OAuth 2.1 for SaaS integrations and workload identity federation for cloud resources. When API keys are unavoidable, they require vault-backed storage, strict uniqueness per agent, enforced expiration, and continuous monitoring.
Furthermore, credential lifecycle management must become event-driven. Rotation should trigger on deployment updates or configuration changes, not arbitrary calendar dates. Every autonomous system must have a tested revocation capability. The data is clear: the dominant issue is lifecycle negligence, where long-lived secrets account for the majority of policy violations. Secrets are living too long, spreading too widely, and being copied faster than they can be governed.
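The event-driven model above can be sketched as follows. The event names and class are hypothetical, standing in for whatever a real secrets manager or CI/CD webhook would emit; the point is the control flow: lifecycle events trigger rotation immediately, with a maximum-age check as a backstop rather than the primary mechanism.

```python
import time

# Hypothetical lifecycle events that should invalidate a secret.
ROTATION_EVENTS = {"deployment.updated", "config.changed", "owner.offboarded"}

class RotationPolicy:
    """Rotate on lifecycle events, not on arbitrary calendar dates."""

    def __init__(self, rotate_fn, max_age_seconds=86_400):
        self.rotate_fn = rotate_fn              # callback that mints a new secret
        self.max_age_seconds = max_age_seconds  # hard backstop, not the main trigger
        self.last_rotated = time.time()

    def on_event(self, event_type: str) -> bool:
        """Rotate immediately when a lifecycle event touches the secret."""
        if event_type in ROTATION_EVENTS:
            self._rotate()
            return True
        return False

    def check_age(self, now=None) -> bool:
        """Backstop: rotate if the secret somehow outlives its maximum age."""
        now = time.time() if now is None else now
        if now - self.last_rotated > self.max_age_seconds:
            self._rotate()
            return True
        return False

    def _rotate(self):
        self.rotate_fn()
        self.last_rotated = time.time()
```

The `rotate_fn` callback is where a real system would call its vault or identity provider; the policy itself only decides *when*. A tested revocation path is the same machinery invoked with an empty replacement.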
This represents a profound velocity gap. AI accelerates software production, which in turn accelerates identity creation. When identities proliferate at this speed, secrets spread faster than traditional governance mechanisms can adapt. The doubled leak rate in AI-assisted commits is an architectural warning. Current authentication models assume human-paced integration with manual governance checkpoints, but AI eliminates those natural slowdowns. Organizations must now rebuild their authentication governance for the age of AI, or they will continue to accumulate credential risk faster than they can possibly manage it.
(Source: Help Net Security)




