Irregular Raises $80M to Fortify Frontier AI Security

Summary
– Irregular secured $80 million in funding led by Sequoia Capital and Redpoint Ventures, valuing the company at $450 million.
– The firm focuses on securing AI systems against risks from human-AI and AI-AI interactions, which it believes will challenge current security stacks.
– Previously known as Pattern Labs, Irregular is recognized for its AI evaluations and its SOLVE framework used in industry vulnerability assessments.
– The company aims to identify emergent AI risks through simulated environments where AI models act as both attackers and defenders.
– AI security is a growing industry concern as models become increasingly capable of finding software vulnerabilities, a capability that cuts both ways for attackers and defenders.

AI security firm Irregular has secured $80 million in a funding round led by Sequoia Capital and Redpoint Ventures, with Wiz CEO Assaf Rappaport also participating. According to sources familiar with the transaction, the investment values the company at $450 million. This substantial financial backing underscores growing investor confidence in specialized security solutions tailored for advanced artificial intelligence systems.
Dan Lahav, co-founder of Irregular, emphasized the shifting landscape of digital interactions, noting that economic activity will increasingly stem from human-AI and AI-AI engagements. He warned that these evolving dynamics will inevitably expose vulnerabilities across multiple points in existing security infrastructures.
Formerly operating under the name Pattern Labs, Irregular has already established itself as a significant contributor to AI safety evaluations. The company’s methodologies are referenced in security assessments for leading models such as Claude 3.7 Sonnet and OpenAI’s o3 and o4-mini. Its proprietary SOLVE framework, which scores a model’s ability to detect vulnerabilities, has gained widespread adoption across the industry.
While the firm has focused extensively on identifying known risks in AI models, this new funding will support an even more forward-looking objective: detecting emergent risks and behaviors before they manifest in real-world applications. Irregular has developed sophisticated simulated environments that allow for rigorous pre-deployment testing of AI systems.
Co-founder Omer Nevo described their approach, explaining that the company runs complex network simulations where AI models assume both offensive and defensive roles. This dual-perspective testing helps identify weaknesses in a model’s security posture long before it reaches public use.
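Nevo’s description suggests a form of adversarial self-play. As a loose illustration only, the toy Python sketch below stages a round-based exchange between a stand-in “attacker” and “defender” over a mock network; every name, service, and mechanism in it is an assumption made for illustration, not a depiction of Irregular’s actual tooling.

```python
# Hypothetical sketch of a red-team/blue-team simulation loop.
# Nothing here reflects Irregular's real system; the "models" are
# random stand-ins where a production harness would invoke AI agents.

import random

# A simulated network: each service maps to its (hidden) weaknesses.
network = {
    "auth-service": {"weak-password-policy", "verbose-errors"},
    "file-server": {"path-traversal"},
    "api-gateway": set(),
}

KNOWN_WEAKNESS_CLASSES = [
    "weak-password-policy", "verbose-errors", "path-traversal", "ssrf",
]

def attacker_move(network):
    """Stand-in for an attacker model: probe one service for one
    weakness class. A real harness would plan probes with an AI agent."""
    service = random.choice(list(network))
    probe = random.choice(KNOWN_WEAKNESS_CLASSES)
    return service, probe

def defender_move(network, service, probe):
    """Stand-in for a defender model: patch the probed weakness
    if it actually exists on that service."""
    if probe in network[service]:
        network[service].discard(probe)
        return "patched"
    return "no-op"

# Run the engagement and log each round for post-hoc scoring.
for step in range(10):
    service, probe = attacker_move(network)
    hit = probe in network[service]          # breach before any patch
    outcome = defender_move(network, service, probe)
    print(f"round {step}: probed {service}/{probe} -> "
          f"{'breach' if hit else 'miss'}, defender: {outcome}")
```

The value of a loop like this lies in the logs: counting how often the attacker breaches before the defender patches gives a crude measure of a model’s security posture ahead of deployment.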
The heightened focus on AI security comes at a critical time, as frontier models introduce unprecedented capabilities and, with them, unprecedented risks. Recent months have seen major players like OpenAI revamp internal security protocols, partly in response to concerns around corporate espionage and unintended model behaviors.
Adding another layer of complexity, AI systems are becoming increasingly proficient at identifying software vulnerabilities. This capability presents a double-edged sword, empowering both cybersecurity defenders and potential malicious actors.
For the team at Irregular, these developments represent just the beginning of a long-term challenge. Lahav framed the company’s mission clearly: “If frontier labs aim to build more sophisticated models, our role is to secure them.” He acknowledged the inherent difficulty of the task, describing it as a moving target that will demand continuous innovation and effort.
The road ahead remains long, but with fresh capital and industry backing, Irregular is positioning itself at the forefront of AI security, ready to address the complex threats posed by next-generation artificial intelligence.
(Source: TechCrunch)