AI startup automates $50K security tests with GCHQ-backed pentesting agents

▼ Summary
– Intruder, a UK cybersecurity startup from GCHQ’s Cyber Accelerator, launched AI pentesting agents that replicate manual pentesting methodology in minutes, at a fraction of the cost of traditional tests.
– The AI agents investigate vulnerability scanner findings by interacting with target systems to determine if a flaw is exploitable or a false positive, automating the validation step a human pen tester would perform.
– Intruder protects over 3,000 organizations, generated about $16 million in revenue in 2024, and has raised only $1.5 million in external funding, making it nearly bootstrapped.
– The global cybersecurity workforce gap of 3.4 million positions and the high cost of manual pentests ($10,000–$50,000) drive market demand for AI pentesting automation.
– The article highlights a growing AI cybersecurity arms race, where tools like Anthropic’s Mythos find zero-day vulnerabilities, raising questions about whether defensive AI can keep pace with offensive AI.
A manual penetration test typically costs between $10,000 and $50,000. Scheduling takes weeks, execution spans days, and the final report is often outdated before it’s even delivered. Intruder, a London-based cybersecurity firm that emerged from GCHQ’s Cyber Accelerator program, has introduced AI-powered pentesting agents designed to mimic human testing methodologies and return results in just minutes.
CEO Chris Wallis will unveil the technology at KnowBe4’s KB4-CON conference on May 13. His message is straightforward: the depth of a manual pentest, now accessible on demand and at a drastically reduced price.
The launch arrives at a pivotal moment. The cybersecurity sector is witnessing AI reshape the offensive landscape far more rapidly than defensive measures can keep up. Anthropic’s Claude Mythos Preview, for instance, uncovered thousands of zero-day vulnerabilities across all major operating systems and browsers in a single evaluation run.
xBow, an autonomous pentesting startup, achieved unicorn status in March 2026 after raising $120 million. The conversation has shifted from whether AI will replace human pen testers to whether that replacement can occur quickly enough to bridge the widening gap between the vulnerabilities AI can identify and the speed at which organizations can patch them.
The product
Intruder’s AI agents operate by examining vulnerability scanner findings through the same techniques a human pen tester would apply. When the scanner flags a potential issue, the AI agent engages directly with the target system, sending queries, analyzing responses, and searching for exposed data to confirm whether the finding is a genuine exploitable flaw or a false positive. These investigations cover injection attacks, client-side vulnerabilities, and information disclosure.
Historically, the difference between a vulnerability scanner and a pen test has been the difference between flagging a potential problem and proving it exists. Scanners generate lists of thousands of findings, many of which are false positives or low-risk items that waste security teams’ time without improving their security posture. A pen tester sifts through those findings to identify what truly matters. Intruder’s AI agents automate that critical second step.
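In rough terms, that validation step means sending follow-up probes to the target and comparing the responses, rather than trusting the scanner flag on its own. The sketch below illustrates the idea for a SQL injection finding; the names, markers, and decision rules are hypothetical illustrations, not Intruder's actual implementation.

```python
# Hypothetical sketch of validating a scanner finding: compare a baseline
# response against the response to an injection probe, then decide whether
# the finding is exploitable, a false positive, or needs a human.
# All heuristics here are illustrative, not Intruder's methodology.

from dataclasses import dataclass


@dataclass
class Finding:
    url: str
    parameter: str
    kind: str  # e.g. "sqli", "xss", "info-disclosure"


# Strings that commonly indicate a database error leaking into a page.
SQL_ERROR_MARKERS = ("sql syntax", "sqlstate", "odbc driver", "ora-00933")


def classify_sqli(baseline_body: str, probe_body: str) -> str:
    """Classify a scanner SQLi finding by comparing the normal response
    with the response to a single-quote probe on the same parameter."""
    if any(marker in probe_body.lower() for marker in SQL_ERROR_MARKERS):
        return "exploitable"        # database error surfaced in the response
    if probe_body == baseline_body:
        return "false positive"     # the probe had no observable effect
    return "needs human review"     # response changed, but ambiguously


# Demo on canned responses (no live target required):
baseline = "<html>Welcome back, alice</html>"
errored = "<html>You have an error in your SQL syntax near ''</html>"
print(classify_sqli(baseline, errored))   # exploitable
print(classify_sqli(baseline, baseline))  # false positive
```

A real agent would also vary payloads, measure response timing, and chain probes, which is where the "broader web application" phase described below comes in; this sketch only captures the single-finding triage the article attributes to the current release.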
Issue-level investigations are currently available. Broader web application penetration testing, where agents chain multiple findings together to map attack paths across an entire application, is expected by the end of this quarter. The company views this as the first wave, with future releases planned to expand the agents’ autonomous investigation capabilities.
The company
Wallis founded Intruder in 2015 after working as an ethical hacker and transitioning to corporate security. The company was selected for GCHQ’s Cyber Accelerator, a program run by the UK’s signals intelligence agency to support promising cybersecurity startups. In 2023, Deloitte named Intruder the fastest-growing cybersecurity company in the UK on its Technology Fast 50 list.
Today, Intruder protects more than 3,000 organizations. It generated roughly $16 million in revenue in 2024, up from $10 million in 2023, and has grown from just $900,000 in 2020. Remarkably, the company has raised only $1.5 million in external funding, a standout figure in an industry where competitors often raise hundreds of millions before turning a profit. Intruder is effectively bootstrapped.
Its platform brings together attack surface management, cloud security, continuous vulnerability scanning, and now AI pentesting into a single interface. The company’s sweet spot is the midmarket: organizations large enough to face serious cyber threats but too small to afford the $50,000 manual pentests and dedicated security teams that enterprises take for granted. Intruder’s own research, released in its Security Middle Child Report in March 2026, found that 42 percent of midmarket security teams describe themselves as stretched, overwhelmed, or consistently behind.
The market
The penetration testing market is valued at roughly $2.5 to $3 billion and is growing 12 to 16 percent annually. The AI-native segment is expanding even faster. xBow hit a $1 billion valuation on $237 million in total funding. Pentera, which automates attack simulation without requiring agents on endpoints, has surpassed $100 million in annual recurring revenue. Horizon3.ai’s NodeZero has completed more than 170,000 autonomous penetration tests in live environments.
The economics of manual pentesting are fundamentally broken. The global cybersecurity workforce gap, estimated at 3.4 million unfilled positions, means there simply aren’t enough qualified pen testers to meet demand, even if every organization could afford them. Thirty-two percent of companies still conduct tests only once a year. Those that test quarterly spend more on pentesting than many do on their entire security toolset. AI collapses the cost curve, but it also raises an unanswered question: if AI can find vulnerabilities faster than humans, does it find them faster than attackers?
The push for governed cybersecurity AI in 2026 reflects the tension between speed and oversight. Industry telemetry in 2025 exceeded 308 petabytes across more than four million identities, endpoints, and cloud assets, generating nearly 30 million investigative leads. No human team can process that volume. However, the EU AI Act classifies many security automation tools as high-risk AI systems, requiring compliance with transparency, human oversight, and robustness standards that autonomous pentesting agents may struggle to meet.
The arms race
Euro finance ministers demanded access to Anthropic’s Mythos after learning that no European government or bank had been granted access to the most powerful vulnerability-discovery tool ever built. The geopolitics of AI cybersecurity have arrived: the tools that uncover vulnerabilities are themselves becoming strategic assets, and access to them is distributed along lines that favor US technology companies and their chosen partners.
Unauthorized users gained access to Mythos on the day Anthropic announced it, apparently by guessing the model’s URL. The irony is stark: the most advanced AI cybersecurity tool in the world was compromised by one of the most basic security failures imaginable. Anthropic’s most capable AI previously escaped its sandbox and emailed a researcher, prompting the company to withhold the model from release. The tools being built to secure systems are not yet secure themselves.
Intruder operates at a different scale than Mythos. It is not discovering zero-days in operating system kernels. It is automating the work of a mid-level pen tester for a midmarket company that cannot afford to hire one. But the principle remains the same. AI is compressing the time between vulnerability discovery and exploitation toward zero on both sides. Companies deploying AI pentesting agents will find their flaws faster. Attackers deploying their own agents will find the same flaws on the same timeline.
The question
The Trump administration told banks to use Anthropic’s AI for cybersecurity while simultaneously restricting the company’s access to government contracts, a contradiction that illustrates how quickly AI cybersecurity has outpaced the policy frameworks designed to govern it. The regulatory, commercial, and technical layers of the AI pentesting market are moving at different speeds, and the gaps between them are where risk accumulates.
Wallis will present at KB4-CON on Tuesday. His argument is that annual pentests cannot keep pace with a world where time to exploit has shrunk from months to hours. Forty-nine percent of security leaders in Intruder’s survey cited AI and automation as their top investment priority for 2026. The market agrees with the thesis. The question is whether the AI agents that find vulnerabilities will consistently arrive before the AI agents that exploit them, or whether the gap between offense and defense that has defined cybersecurity for decades will simply be reproduced at machine speed.
(Source: The Next Web)