CISA Expands AI Role in Cybersecurity Program

Summary
– A CISA leader stated AI companies like OpenAI and Anthropic should have a greater role in the Common Vulnerabilities and Exposures (CVE) program.
– Anthropic launched Claude Mythos Preview, an AI model that autonomously discovered thousands of new vulnerabilities in testing.
– OpenAI released GPT-5.4-Cyber, a cybersecurity-focused AI model available to a restricted group of users.
– Vulnerability reports are accelerating, with forecasts predicting 50,000 to over 70,000 new CVEs will be recorded in 2026.
– The CVE program is expanding its contributor base and aims to grow the number of CVE Numbering Authorities (CNAs), the organizations authorized to assign CVE identifiers.

The rapid integration of artificial intelligence into cybersecurity is prompting a fundamental shift in how software vulnerabilities are managed. A senior leader from the world’s largest vulnerability disclosure program has publicly called for AI developers to take on a more significant role. Speaking at the opening of VulnCon26, Lindsey Cerkovnik, who heads the Vulnerability Response & Coordination Branch at CISA, stated that firms like OpenAI and Anthropic should be better represented within the Common Vulnerabilities and Exposures program. As the sole sponsor of the MITRE-run initiative, CISA manages coordinated disclosures, and Cerkovnik acknowledged the program is at a turning point due to the accelerating volume of reports and the emergence of new AI tools.
This call to action coincides with major advancements from leading AI labs. Just days before the conference, Anthropic launched Claude Mythos Preview, a large language model designed to autonomously discover and remediate security flaws at scale. Currently available only to members of Project Glasswing, the model has reportedly identified thousands of zero-day vulnerabilities during testing. In one demonstration, it autonomously chained several flaws within the Linux kernel to create an exploit path from basic user access to full system control. However, initial assessments from the UK’s AI Security Institute noted that its effectiveness against well-defended production systems remains uncertain.
Similarly, OpenAI has introduced GPT-5.4-Cyber, a version of its model fine-tuned specifically for cybersecurity applications. Access is restricted to participants in its Trusted Access for Cyber Defense program. These developments signal a new era where AI-powered vulnerability research could drastically increase the number of flaws entering public databases.
This potential surge comes atop an already steep growth curve. The CVE program currently lists over 327,000 unique records. Analysis by Cisco’s Jerry Gamblin shows 18,247 CVEs were reported in the first part of 2026, marking a 27.9 percent increase from the same period last year. The daily average has risen to 174, up from 132 in 2025. The Forum of Incident Response and Security Teams, which co-hosts VulnCon, previously forecast a record 50,000 new CVEs for the full year. Gamblin projects an even higher figure of roughly 70,135, which would represent a 45.6 percent annual growth rate.
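The projection arithmetic above can be sketched as a simple linear extrapolation from the year-to-date pace. This is a minimal illustration, not Gamblin's actual methodology (his 70,135 figure comes from a more sophisticated model; a naive pace-based estimate lands well below it). The function name `project_annual_cves`, the April 15 cut-off date, and the 2025 baseline of 48,180 (365 days × the article's 132-per-day average) are assumptions for illustration, not figures from the article.

```python
from datetime import date

def project_annual_cves(ytd_count: int, as_of: date,
                        prior_year_total: int) -> tuple[float, float]:
    """Linearly extrapolate a full-year CVE count from a year-to-date
    total, and compute the implied year-over-year growth rate (%)."""
    days_elapsed = as_of.timetuple().tm_yday          # day of year so far
    days_in_year = date(as_of.year, 12, 31).timetuple().tm_yday
    projected = ytd_count / days_elapsed * days_in_year
    growth_pct = (projected - prior_year_total) / prior_year_total * 100
    return projected, growth_pct

# Article figures: 18,247 CVEs year-to-date at ~174/day implies ~105 days
# elapsed, so April 15 is used as an assumed cut-off. The 2025 baseline of
# 48,180 is derived from the article's 132-per-day 2025 average.
projected, growth = project_annual_cves(18247, date(2026, 4, 15), 48180)
print(f"Projected 2026 total: {projected:,.0f} ({growth:.1f}% YoY growth)")
```

At a constant pace, this yields roughly 63,000 CVEs for 2026 — between FIRST's 50,000 forecast and Gamblin's 70,135 projection, which assumes the reporting rate keeps accelerating rather than holding steady.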
Cerkovnik’s push for greater AI company involvement aligns with the CVE program’s strategy to diversify and expand its contributor base. A key objective is growing the roster of CVE Numbering Authorities, the organizations authorized to assign public identifiers to vulnerabilities. This effort was advanced last July with the creation of two new forums, the Consumer Working Group and the Researcher Working Group. By the end of March 2026, the program announced it had surpassed 500 contributors, with 502 CNAs now registered. Formally integrating AI firms as CNAs could be a logical next step, positioning them as official vulnerability reporters within a framework struggling to manage the flood of data their own technologies are helping to create.
(Source: Infosecurity Magazine)