CIS, Astrix & Cequence Release AI Security Best Practices

Summary
– Three cybersecurity organizations have formed a partnership to create new security guidance for AI and agentic systems.
– The guidance will extend the CIS Critical Security Controls framework to address unique AI risks like autonomous decision-making and API access.
– Two specific companion guides will be developed: one for AI Agent Environments and another for Model Context Protocol (MCP) environments.
– The initiative combines expertise in security standards, API security, and application defense to provide comprehensive protection.
– The final guidance and supporting resources are scheduled for release in early 2026.

A new collaboration aims to provide the cybersecurity community with essential guidance for navigating the unique challenges of artificial intelligence. The Center for Internet Security (CIS), Astrix Security, and Cequence Security have formed a strategic partnership to develop specialized security best practices for AI and agentic systems. This initiative will extend the widely recognized CIS Critical Security Controls into the complex world of AI, where automated decision-making and system integrations create novel vulnerabilities.
The partnership plans to initially produce two companion guides that build upon the CIS Controls framework. The first will focus on AI Agent Environments, detailing how to secure the entire lifecycle of an autonomous agent system. The second will address Model Context Protocol (MCP) environments, which are particularly susceptible to risks like credential exposure, ungoverned code execution, and uncontrolled data flows between AI models and external tools. These documents are designed to offer organizations clear, actionable safeguards for dynamic settings where MCP components interact with core enterprise infrastructure.
Curtis Dukes, Executive Vice President and General Manager of Security Best Practices at CIS, emphasized the dual nature of this technological shift. He noted that while AI offers incredible potential, it also introduces significant dangers. This collaborative effort, he stated, is about equipping organizations with the necessary resources to implement AI solutions in a secure and responsible manner.
From Astrix Security, the expertise centers on protecting AI agents and the Non-Human Identities (NHIs) that enable them, such as API keys and OAuth tokens. Jonathan Sander, Field CTO at Astrix, explained that these powerful agents and their identities unlock new capabilities but also create fresh attack surfaces. The goal of their contribution is to help businesses discover, secure, and deploy AI agents with the confidence to scale their use safely. The guidance from this partnership is intended to provide the practical steps needed to maintain security across expanding AI ecosystems.
Cequence Security contributes deep experience in securing enterprise applications and APIs, which is critical for governing what AI agents can access and manipulate. Ameya Talwalkar, CEO of Cequence, pointed out that trust in agentic AI depends entirely on having clear visibility and control over an agent’s permissions and actions. He stressed that security is most effective when built through cooperation, and this alliance provides a clear roadmap for safe AI adoption.
This partnership is structured to support organizations in three key ways. First, it adapts proven cybersecurity frameworks to address the specific risks arising from autonomous systems. Second, it delivers prioritized, straightforward safeguards to steer enterprises toward responsible AI implementation. Finally, it merges specialized knowledge across industry standards, API security, and application defense into a comprehensive protective strategy.
The completed guidance documents are slated for publication in early 2026. The release will be supported by joint workshops, webinars, and additional resources from all three organizations. The overarching mission is to assist enterprises in putting these recommendations into practice, thereby fostering greater trust, transparency, and resilience within their AI operations. By establishing a shared security framework, the collaboration seeks to create a common language for vendors, security leaders, and businesses to collectively secure the AI landscape.
(Source: Help Net Security)
