Cisco AI Security Report: 83% Want Agents, 29% Ready
Cisco's State of AI Security 2026 report reveals a dangerous gap between agentic AI adoption ambitions and enterprise security readiness. Here's what the threat landscape looks like.
Cisco released its State of AI Security 2026 report today, mapping an AI threat landscape that's evolving faster than most organizations can defend against. The headline finding: 83 percent of organizations plan to deploy agentic AI capabilities, but only 29 percent believe they're prepared to do it securely.
That 54-point gap between ambition and readiness defines the current AI security crisis.
The Readiness Problem
The report, authored by Cisco's Emile Antone and threat research lead Amy Chang, describes what they call a "major paradigm shift in AI security." The shift isn't hypothetical—it's already underway. The second half of 2025 saw AI-specific exploits move from research papers into active campaigns, with prompt injection and jailbreaking techniques appearing in real-world attacks.
Shadow AI makes the problem worse. Employees are using personal AI accounts that bypass corporate visibility entirely, a pattern we detailed in coverage of Netskope's findings on shadow AI data violations. When nearly half of GenAI users access tools through unmanaged services, enterprise security teams are flying blind.
The scale of the external AI ecosystem compounds the risk. Cisco's report cites over 2 million models and 500,000 datasets on Hugging Face alone—a sprawling third-party landscape that most organizations have no way to audit.
Four Threat Categories to Watch
The report organizes the AI threat landscape into four primary attack classes:
Prompt Injection and Jailbreaks remain the most immediate concern. Attackers have refined techniques for manipulating AI systems into bypassing safety controls or leaking sensitive data. Cisco notes these attacks have graduated from theoretical proofs-of-concept to production-grade exploits. We covered one of Cisco's own testing tools for this attack class in our piece on GPT-OSS-Safeguard's multi-turn jailbreak testing.
Supply Chain Vulnerabilities target the models, datasets, training pipelines, and open-source components that underpin enterprise AI. When a poisoned dataset or backdoored model gets pulled into production, the compromise is baked into every inference the system makes.
Agentic AI Risks represent the newest and potentially most dangerous category. Unlike chatbots that just generate text, AI agents can execute actions—booking travel, processing transactions, modifying infrastructure. As Cisco's Amy Chang puts it: "An AI chatbot can be manipulated into saying something harmful; an AI agent can be manipulated into doing something harmful."
The Model Context Protocol (MCP), which enables agents to interact with external tools, introduces specific vulnerabilities that attackers are already probing. Cisco released open-source MCP scanners alongside the report to help organizations identify risky tool configurations—tools we covered in yesterday's DevNet AI Repos Catalog announcement.
AI-Enabled Attack Campaigns flip the script, examining how adversaries weaponize AI rather than attack it. Nation-state actors and cybercriminal groups are integrating generative AI into reconnaissance, phishing, and malware development workflows.
The Policy Landscape Diverges
The report dedicates significant attention to how different governments are approaching AI regulation:
United States maintains an innovation-focused, regulatory-light stance, relying primarily on voluntary commitments and sector-specific guidance rather than binding rules.
European Union continues building on the AI Act but has simplified compliance requirements while directing public funding toward domestic AI development.
China pursues a dual-track strategy that integrates AI expansion into state planning while implementing sophisticated risk management frameworks that prioritize social stability.
For multinational organizations, this fragmented regulatory environment means a single AI deployment may face three incompatible compliance regimes.
Cisco's Open-Source Response
The report arrives alongside four new open-source security tools:
- A structure-aware pickle fuzzer for generating adversarial files that test model loading routines
- Scanners for MCP, A2A (agent-to-agent), and agentic skill files that identify supply chain risks
- AI Defense integrations for runtime monitoring
These tools reflect Cisco's broader bet that enterprises need AI-native security controls rather than retrofitted solutions. The Cisco AI Defense platform announced at last month's AI Summit provides the commercial implementation.
Why This Matters
The gap between AI adoption and security maturity isn't sustainable. Gartner projects that by late 2026, more than 80 percent of enterprises will have generative AI in production, while fewer than a third will have mature governance frameworks. That mismatch creates systemic risk across industries.
Cisco's report is ultimately a warning: organizations racing to deploy AI agents without addressing the security fundamentals—supply chain validation, runtime monitoring, prompt injection defenses—are building on foundations that attackers are already learning to exploit.
The 29 percent who feel prepared have work to do. The 71 percent who don't have even more. For deeper reading on AI security fundamentals and enterprise defense strategies, check our guides section.
Related Articles
AIUC-1 Becomes First Standard for Securing AI Agents
Cisco helps build AIUC-1, the first AI agent security standard, mapping its AI Security Framework to testable controls for prompt injection, jailbreaks, and more.
Feb 6, 2026Cisco DevNet Launches AI Repos Catalog for MCP Servers
New catalog at developer.cisco.com/codeexchange/ai centralizes AI agents and MCP servers for network automation, with built-in testing tools.
Feb 18, 2026Cisco AI Summit: Security Takes Center Stage
Cisco's second AI Summit unveiled AI Defense, AgenticOps, and Silicon One P200. Here's what security teams need to know about agentic AI governance.
Feb 6, 2026Talos Warns AI Adoption Is Outrunning Security
Cisco Talos sounds the alarm on AI tools that demand root access and store credentials in plaintext, calling the current adoption frenzy a security crisis.
Feb 5, 2026