Announcements · February 4, 2026 · 5 min read

Cisco Maps the Five Domains of AI Security

New taxonomy from Cisco's CISO and security leadership defines five AI security domains and the organizational functions needed to secure enterprise AI systems.

ProbablyPwned Team

Cisco's security leadership team has published a taxonomy that organizes enterprise AI security into five distinct domains, aiming to solve what they call a "shared language" problem plaguing executive boardrooms and security teams alike.

The framework comes from Omar Santos (Distinguished Engineer and co-chair of the Coalition for Secure AI), Jason Lish (Cisco's CISO), and Larry Lidz (VP of Software Security). Their argument: organizations can't protect AI systems they can't properly categorize.

The Five Domains

The taxonomy breaks AI security into five areas that often get conflated in board discussions:

Securing AI focuses on defending AI models themselves from adversarial attacks—data poisoning, model extraction, and the growing catalog of techniques documented in MITRE ATLAS.

AI for Security flips the relationship, using AI to enhance threat detection and automate defensive capabilities. This is where most vendor marketing lives, but it's only one piece of the puzzle.

AI Governance addresses organizational oversight—who owns AI systems, what policies govern their deployment, and how decisions get documented. This domain has become critical as AI agent security emerges as a top CISO priority for 2026.

AI Safety provides guardrails on AI outputs to prevent harmful content generation and behavioral drift. Think content filters and output monitoring; a minimal filter sketch follows this list of domains.

Responsible AI handles regulatory compliance—GDPR implications for AI training data, emerging AI-specific regulations, and ethical deployment standards.
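
To make the AI Safety domain concrete, here is a minimal output-filtering sketch. The patterns, function name, and redaction behavior are illustrative assumptions, not part of Cisco's framework; production guardrails typically layer classifiers, policy engines, and human review on top of anything this simple.

```python
import re

# Hypothetical deny-list a simple output filter might screen for.
# Real guardrails combine classifiers, policy checks, and monitoring;
# this only illustrates the "content filter + output monitoring" idea.
BLOCKED_PATTERNS = [
    re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),         # SSN-like strings
    re.compile(r"(?i)api[_-]?key\s*[:=]\s*\S+"),  # credential-looking output
]

def filter_model_output(text: str) -> tuple[str, bool]:
    """Return (redacted text, flagged?) for a model response."""
    flagged = False
    for pattern in BLOCKED_PATTERNS:
        if pattern.search(text):
            flagged = True
            text = pattern.sub("[REDACTED]", text)
    return text, flagged

response, needs_review = filter_model_output(
    "Sure, api_key=sk-12345 should work for that endpoint."
)
print(response, needs_review)  # credential redacted, response flagged for review
```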

Why Shared Language Matters

The authors identify a cascade of problems when organizations lack common definitions. Executive discussions devolve into buzzword exchanges. Vendor evaluations compare apples to oranges. Security strategies develop gaps because teams assume someone else owns a particular risk.

This isn't theoretical. When we covered the LangChain serialization flaw that enabled secret extraction from AI agents, the underlying issue wasn't just a code vulnerability—it was organizational confusion about who should be securing AI agent infrastructure in the first place.

Similarly, OpenAI's recent admission that prompt injection in AI browsers may never be fully solved raises governance questions that most enterprises haven't answered: Who accepts that residual risk? What compensating controls exist? Which domain does this even fall under?

The Organizational Reality

The AI agents market is projected to grow from $5.4 billion to $50.31 billion by 2030, and CISOs are scrambling to establish controls for systems that didn't exist two years ago. Current challenges include:

Identity management remains immature. AI agents need credentials and permissions, but as one analyst put it, "permissions and access rights for AI are a black box in many areas."

Governance frameworks are playing catch-up. When an AI agent autonomously isolates a subnet or revokes user privileges, questions arise about liability and whether the decision was financially defensible. Traditional GRC models built on static policies don't accommodate this reality; a minimal policy-gate sketch follows these challenges.

Internal risk compounds external threats. Employees use both approved and unsanctioned AI tools, feeding sensitive data into prompts without adequate oversight.
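
These identity and governance gaps can be narrowed with explicit, auditable controls. The sketch below shows an agent whose permissions are a plain allow-list and whose destructive actions require a named human approver; the action names, scopes, and approval rule are assumptions for illustration, not any existing product API.

```python
from dataclasses import dataclass, field

# Actions this hypothetical policy never allows an agent to run autonomously.
REQUIRES_HUMAN_APPROVAL = {"isolate_subnet", "revoke_user_privileges"}

@dataclass
class AgentIdentity:
    """A named agent with an explicit, auditable permission scope."""
    name: str
    allowed_actions: set[str] = field(default_factory=set)

def authorize(agent: AgentIdentity, action: str, approved_by: str | None = None) -> bool:
    """Permit an action only if it is in scope and, when destructive, human-approved."""
    if action not in agent.allowed_actions:
        return False  # out of scope: the "black box" becomes an explicit deny
    if action in REQUIRES_HUMAN_APPROVAL and approved_by is None:
        return False  # governance gate: no autonomous destructive actions
    return True

soc_agent = AgentIdentity("triage-agent", {"quarantine_host", "isolate_subnet"})
print(authorize(soc_agent, "isolate_subnet"))                         # False: needs approval
print(authorize(soc_agent, "isolate_subnet", approved_by="analyst"))  # True
print(authorize(soc_agent, "revoke_user_privileges"))                 # False: out of scope
```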

Integration with Broader Frameworks

The Cisco taxonomy isn't meant to stand alone. The authors reference the Coalition for Secure AI (CoSAI), an OASIS Open Project whose founding members include Amazon, Anthropic, Google, IBM, Microsoft, NVIDIA, and OpenAI. CoSAI is developing a security-focused risk and controls taxonomy alongside Cisco's framework.

The Cisco Integrated AI Security and Safety Framework builds on this foundation, providing lifecycle-aware guidance from model development through production deployment.

For practitioners wanting implementation details, Cisco recently published work on Analytics Context Engineering that addresses how to actually feed machine data into LLMs without sacrificing accuracy or exploding context windows.
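
Cisco's post isn't reproduced here, but the underlying problem it tackles, fitting high-volume machine data into a bounded context window, can be sketched in general terms. The field selection, record cap, and character budget below are illustrative assumptions, not Cisco's Analytics Context Engineering approach.

```python
import json

def build_llm_context(events: list[dict], fields: list[str],
                      max_records: int = 20, char_budget: int = 4000) -> str:
    """Condense raw telemetry into a compact, budget-bounded prompt snippet.

    Keeps only the requested fields, caps the record count, and stops before
    exceeding a rough character budget (a crude stand-in for a token budget).
    """
    lines: list[str] = []
    used = 0
    for event in events[:max_records]:
        row = json.dumps({k: event.get(k) for k in fields}, separators=(",", ":"))
        if used + len(row) > char_budget:
            break
        lines.append(row)
        used += len(row)
    return "\n".join(lines)

raw_events = [
    {"ts": "2026-02-04T10:01:00Z", "src_ip": "10.0.0.5", "action": "deny",
     "bytes": 512, "raw_payload": "x" * 10_000},   # bulky field we drop
    {"ts": "2026-02-04T10:01:02Z", "src_ip": "10.0.0.7", "action": "allow",
     "bytes": 2048, "raw_payload": "y" * 10_000},
]
print(build_llm_context(raw_events, fields=["ts", "src_ip", "action", "bytes"]))
```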

Practical Application

The value of this taxonomy lies in forcing conversations that many organizations have avoided. Consider these scenarios:

An AI chatbot gets tricked into leaking customer data. Is that a Securing AI problem (model vulnerability), an AI Governance problem (deployment oversight), or an AI Safety problem (output filtering)? The answer determines who leads the response and what controls get implemented.

A marketing team deploys a third-party AI tool without security review. That's clearly AI Governance, but remediating it requires AI Safety guardrails and potentially Responsible AI compliance review.

A threat actor uses AI to generate phishing emails at scale. That's an AI for Security detection challenge, but defending against it may require your own AI systems to be functioning properly—circling back to Securing AI.

What This Means for Security Teams

The immediate action is vocabulary alignment. Before your next executive presentation or vendor evaluation, ensure stakeholders agree on what they mean when they say "AI security." The five-domain model provides a starting point.

Longer term, the framework suggests organizational structure. Different teams may own different domains—AppSec for Securing AI, compliance for Responsible AI, operations for AI for Security. Mapping these relationships exposes gaps.
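
One way to run that mapping exercise is to keep it as data and check it for coverage. The team assignments below are hypothetical; the point is that a domain with no named owner becomes immediately visible.

```python
from enum import Enum

class AIDomain(Enum):
    SECURING_AI = "Securing AI"
    AI_FOR_SECURITY = "AI for Security"
    AI_GOVERNANCE = "AI Governance"
    AI_SAFETY = "AI Safety"
    RESPONSIBLE_AI = "Responsible AI"

# Hypothetical ownership map for one organization; yours will differ.
DOMAIN_OWNERS = {
    AIDomain.SECURING_AI: "AppSec",
    AIDomain.AI_FOR_SECURITY: "Security Operations",
    AIDomain.RESPONSIBLE_AI: "Compliance",
}

unowned = [domain.value for domain in AIDomain if domain not in DOMAIN_OWNERS]
print("Domains with no named owner:", unowned)
# -> ['AI Governance', 'AI Safety']: exactly the gaps this exercise should expose
```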

For those building AI security programs, the hacking news coverage of AI vulnerabilities provides a steady stream of real-world test cases. Each incident can be categorized against this taxonomy to identify which domains need strengthening.

The taxonomy won't prevent the next AI security incident. But it might ensure organizations can discuss it coherently when it happens.
