OpenAI Report Shows 8x Enterprise AI Growth—Networks Are the Bottleneck
OpenAI's State of Enterprise AI shows 8x adoption growth and 320x reasoning usage. Cisco explains why your network architecture probably can't keep up.
OpenAI's State of Enterprise AI report dropped with numbers that should make every network architect reassess their infrastructure roadmap. The headline: enterprise AI usage grew 8x year-over-year, with advanced reasoning model consumption jumping an eye-popping 320x. Over one million businesses now run OpenAI tools in production, and workers report saving 40-60 minutes daily.
Those gains come with a catch. Cisco's networking team published an analysis this week warning that most enterprise networks were designed for a different era—and the AI-driven traffic patterns flooding corporate infrastructure weren't in the original blueprint.
The Traffic Patterns AI Creates
Cisco's Sanjay Kapoor laid out the problem by mapping six AI use cases to their network implications. The patterns share a common thread: continuous, latency-sensitive, machine-to-machine traffic that traditional enterprise architectures weren't built to handle.
AI-powered customer support introduces voice-based workloads where latency directly affects customer experience. The network isn't just moving packets—it's part of the product.
AI-assisted development creates what Cisco calls "dense, continuous east-west traffic" between developers, code repositories, CI/CD pipelines, and cloud services. Most enterprise networks were optimized for north-south flows, where users reach external resources, not the constant internal chatter that AI-augmented workflows generate.
AI-driven analytics produce bursty workloads with sudden demand spikes. Networks need to absorb those peaks without degradation, which means capacity planning based on averages will fail.
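The averages-versus-peaks point is easy to see with a toy simulation. The sketch below uses entirely made-up utilization numbers for a hypothetical bursty link; it only illustrates why a mean-based capacity plan understates what the network must absorb.

```python
import random
import statistics

random.seed(7)

# Hypothetical per-minute utilization (Gbps) for a bursty AI analytics link:
# a modest baseline punctuated by periodic inference-driven spikes.
# All figures are illustrative, not measurements.
samples = [random.gauss(2.0, 0.5) for _ in range(1000)]
for i in range(0, 1000, 50):  # 20 demand spikes across the window
    samples[i] += random.uniform(8.0, 12.0)

mean = statistics.fmean(samples)
p99 = statistics.quantiles(samples, n=100)[98]  # 99th percentile

print(f"mean utilization: {mean:.1f} Gbps")
print(f"99th percentile : {p99:.1f} Gbps")
# Provisioning to the mean would drop every spike; the p99 figure is the
# number capacity planning actually has to cover.
```

On a run like this the mean sits near the quiet baseline while the 99th percentile lands several times higher, which is exactly the gap that average-based planning misses.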
Agentic AI workflows—autonomous systems that execute multi-step tasks across identity providers, APIs, and SaaS platforms—turn the network into part of the execution path. When an agent orchestrates a workflow spanning finance, HR, and customer systems, network performance becomes business performance. This also explains why organizations are racing to deploy governance frameworks like Cisco's AgenticOps before agentic systems proliferate beyond IT's ability to secure them.
The Readiness Gap Is Real
OpenAI's report found that 75% of enterprise workers now complete tasks they previously couldn't perform at all—coding, data analysis, custom workflow automation. That capability expansion drives demand growth, and demand growth exposes infrastructure limits.
Research from Broadcom put numbers to the problem: while 99% of organizations have cloud strategies incorporating AI, only 49% believe their networks can handle the bandwidth and latency those workloads require. The top pain points include network congestion (46%), insufficient visibility into network behavior (39%), and latency (37%).
A separate Deloitte analysis warned that the shift from episodic AI experiments to always-on production systems is widening the gap between executive expectations and infrastructure reality. Applications requiring response times under 10 milliseconds can't tolerate cloud-based processing latency—they need edge infrastructure that most organizations haven't deployed.
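A rough latency-budget calculation shows why those sub-10 ms applications push processing to the edge. The numbers below are hypothetical ballpark figures (a cross-region WAN round trip versus a campus LAN hop), not benchmarks from the Deloitte analysis.

```python
# Hypothetical latency budget for a real-time AI response that must land
# within 10 ms. All component figures are illustrative assumptions.
BUDGET_MS = 10.0

cloud_path = {
    "WAN round trip to cloud region": 30.0,  # typical cross-region RTT
    "model inference": 5.0,
}
edge_path = {
    "LAN round trip to edge node": 1.0,
    "model inference": 5.0,
}

for name, path in [("cloud", cloud_path), ("edge", edge_path)]:
    total = sum(path.values())
    verdict = "fits" if total <= BUDGET_MS else "blows"
    print(f"{name}: {total:.0f} ms -> {verdict} the {BUDGET_MS:.0f} ms budget")
```

Under these assumptions the network round trip alone consumes the cloud path's entire budget three times over, while the edge path leaves headroom even after inference.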
We've covered this readiness problem before. NetOp AI's network assessment tool was specifically designed to identify infrastructure gaps blocking AI deployments, flagging end-of-life hardware, configuration drift, and capacity limitations before they derail production workloads.
What High Performers Look Like
OpenAI's data reveals a widening gap between organizations that have figured out AI integration and those still experimenting. Frontier firms—those in the 95th percentile of usage—generate roughly twice as many messages per seat as median enterprises and 7x more messages to custom GPTs. That intensity correlates with business outcomes: AI leaders showed 1.7x higher revenue growth, 3.6x greater total shareholder return, and 1.6x higher EBIT margins.
The productivity numbers are similarly stark. Engineering teams at high-adoption firms report 73% faster code delivery. IT departments resolve issues 87% faster. Marketing executes campaigns 85% faster. These aren't marginal improvements.
The security implications of that adoption speed have been a recurring theme in Talos research, where analysts warned that organizations are sacrificing security controls for AI convenience. A recent Gravitee report found that 88% of organizations reported confirmed or suspected AI agent security incidents in the past year, while only 14% have full security approval for their agent deployments.
Cisco's Prescription
Cisco's answer involves what it calls the AI-Ready Secure Network Architecture, built on three pillars: infrastructure optimized for real-time AI workloads, security embedded rather than bolted on, and autonomous operations through AgenticOps to reduce complexity.
The AgenticOps framework—unveiled at Cisco's AI Summit earlier this month—lets software agents detect network problems, recommend fixes, and execute changes while keeping humans in the approval loop. The Deep Network Model powering it draws on 40 years of Cisco's networking expertise, including CCIE training materials.
The underlying hardware story matters too. Cisco's Silicon One P200 chip delivers 51.2 terabits per second of throughput on a single device, designed for the datacenter fabric that large AI models demand. The company expects its AI infrastructure business to hit roughly $3 billion in revenue this year.
Why This Matters for Security Teams
Network changes driven by AI workloads create security implications that defenders can't ignore. Every edge deployment needed for low-latency inference expands the attack surface. Every agent workflow that spans multiple SaaS platforms creates identity management challenges that existing IAM tools weren't designed to address.
By 2028, Deloitte projects that 75% of enterprise AI workloads will run on hybrid infrastructure combining on-premises and cloud components. That hybrid footprint complicates visibility, access control, and incident response.
Cisco's framing captures the stakes: "In the AI era, the enterprise network doesn't just support the business—it enables it." For security teams, that means the network security strategy and the AI governance strategy are increasingly the same conversation.
The organizations pulling ahead aren't just adopting AI faster. They're investing in the infrastructure that lets them adopt it securely. Everyone else is accumulating technical debt they'll pay down later—assuming their networks survive the load.
Related Articles
Cisco AI Summit: Security Takes Center Stage
Cisco's second AI Summit unveiled AI Defense, AgenticOps, and Silicon One P200. Here's what security teams need to know about agentic AI governance.
Feb 6, 2026

Cisco Secure AI Factory with NVIDIA: Partner Revenue at Scale
Cisco 360 Partner Program offers new AI specializations and certifications tied to NVIDIA partnership, with $267B in projected partner-delivered AI services by 2030.
Feb 19, 2026

Cisco AI Security Report: 83% Want Agents, 29% Ready
Cisco's State of AI Security 2026 report reveals a dangerous gap between agentic AI adoption ambitions and enterprise security readiness. Here's what the threat landscape looks like.
Feb 19, 2026

Cisco DevNet Launches AI Repos Catalog for MCP Servers
New catalog at developer.cisco.com/codeexchange/ai centralizes AI agents and MCP servers for network automation, with built-in testing tools.
Feb 18, 2026