Shadow AI Drives 2x Surge in Enterprise Data Violations
Netskope report finds organizations average 223 GenAI policy incidents monthly as employees use personal accounts to access AI tools outside corporate controls.
Enterprise data policy violations linked to generative AI more than doubled in 2025, according to Netskope's Cloud and Threat Report for 2026. Organizations now record an average of 223 GenAI-related policy incidents monthly—violations that include leaking source code, regulated data, and intellectual property to AI services outside corporate control.
The surge stems from shadow AI: employees accessing GenAI tools through personal accounts that bypass security monitoring. Despite increased organizational controls, nearly half of GenAI users still route at least some activity through unmanaged services.
The Scale of Shadow AI
GenAI adoption accelerated dramatically. The number of GenAI users grew 200% over the past year, while prompt volume increased 500%. The average organization now sends around 18,000 prompts monthly to GenAI tools.
But that growth outpaced security controls. Personal account usage remains stubbornly high: 47% of GenAI users access tools through personal accounts, either exclusively or alongside company-approved services. That's down from 78% the prior year, but it still represents a massive blind spot for security teams.
Ray Canzanese, Director of Netskope Threat Labs, put it bluntly: "Enterprise security teams exist in a constant state of change and new risks as organisations evolve and adversaries innovate. However, genAI adoption has shifted the goal posts. It represents a risk profile that has taken many teams by surprise in its scope and complexity."
What's Leaking
The top quartile of organizations sees 2,100 GenAI-linked data policy violations monthly. The sensitive material flowing into uncontrolled AI services includes:
- Source code and proprietary algorithms
- Personally identifiable information subject to privacy regulations
- Customer data and financial records
- Internal business communications
- Strategic planning documents and intellectual property
Each prompt sent to an external AI service is a potential data retention and training-data concern. Terms of service vary by provider and change over time. What employees share with AI tools today may persist in ways organizations cannot predict or control.
Organizational Response
Security teams have responded with blocking: 90% of organizations now actively block at least one GenAI application, up from 80% the prior year. On average, organizations block ten different GenAI tools.
Approved-account usage has increased, with 62% of GenAI users now accessing company-approved services—up from 25% previously. But blocking alone doesn't solve the problem. Employees blocked from ChatGPT can switch to Claude, Gemini, or any of the 1,600+ GenAI applications Netskope now tracks.
The number of GenAI apps grew fivefold over the past year, from 317 to over 1,600. Yet the average number of AI apps used within an organization rose only modestly, from 6 to 8, suggesting employees concentrate activity across a few preferred tools.
The Agentic AI Horizon
The report dedicates significant attention to agentic AI—systems that execute complex, autonomous actions across internal and external resources with limited human direction.
These tools interact with APIs and systems without direct human input for each action, creating what Netskope describes as "a vast, new attack surface that necessitates a fundamental re-evaluation of security perimeters and trust models."
Enterprise experimentation with agentic AI has increased. When AI agents can autonomously move data between systems, make API calls, and execute multi-step workflows, traditional human-centric data loss prevention falls short. The agent acts on defined goals, potentially transferring or exposing data in ways that evade controls designed around human behavior patterns.
Practical Implications
For security teams, the findings suggest several priorities:
Visibility first: You can't secure what you can't see. Network-level monitoring of AI traffic provides baseline awareness, even if blocking everything proves impractical (a minimal log-scanning sketch follows this list).
Approved alternatives: Employees use shadow AI because it works. Providing sanctioned tools with similar capabilities reduces the motivation to route around controls.
Data classification: Knowing what data matters most helps prioritize monitoring and response. Not every policy violation represents equal risk (see the pattern-matching sketch below).
Agentic preparation: As autonomous AI tools proliferate, security architectures designed around human users need rethinking. Agent behavior differs from user behavior in ways that existing controls may miss.
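To make the "visibility first" point concrete, here is a minimal sketch of network-level awareness: counting requests to a handful of well-known GenAI domains per user from web-proxy logs. The log layout, the domain list, and the count_genai_requests helper are assumptions for illustration, not anything described in Netskope's report or tied to a specific product.

```python
# genai_visibility.py - illustrative sketch only; the log format and domain list are assumptions.
# Tallies requests to known GenAI domains per user from a simple space-delimited proxy log
# with the assumed layout: timestamp user dest_host url_path bytes_sent
from collections import Counter

# Small, non-exhaustive sample of GenAI service domains (assumption for illustration).
GENAI_DOMAINS = {
    "chat.openai.com",
    "chatgpt.com",
    "claude.ai",
    "gemini.google.com",
    "copilot.microsoft.com",
}

def count_genai_requests(log_lines):
    """Return a Counter mapping user -> number of requests to known GenAI domains."""
    hits = Counter()
    for line in log_lines:
        fields = line.split()
        if len(fields) < 3:
            continue  # skip malformed lines
        user, dest_host = fields[1], fields[2]
        if dest_host in GENAI_DOMAINS:
            hits[user] += 1
    return hits

if __name__ == "__main__":
    sample = [
        "2026-01-15T09:12:03Z alice chat.openai.com /backend-api/conversation 2048",
        "2026-01-15T09:13:44Z bob claude.ai /api/append_message 1024",
        "2026-01-15T09:15:10Z alice intranet.example.com /wiki/home 512",
    ]
    for user, count in count_genai_requests(sample).most_common():
        print(f"{user}: {count} GenAI request(s)")
```

In practice this kind of tally would come from a secure web gateway or SIEM rather than a standalone script, but even a rough per-user count turns the blind spot into something measurable.

In the same spirit, a crude illustration of the data-classification point: flag prompts that match obvious patterns for regulated or sensitive data before they leave the network, so that not every violation triggers the same response. The patterns, severity labels, and classify_prompt function are illustrative assumptions; real DLP engines are considerably more sophisticated.

```python
# prompt_classifier.py - illustrative sketch only; patterns and severities are assumptions,
# not a substitute for a real DLP engine.
import re

# Map a coarse severity label to regex patterns that suggest sensitive content.
SENSITIVE_PATTERNS = {
    "high": [
        re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),           # US SSN-like pattern
        re.compile(r"-----BEGIN (RSA |EC )?PRIVATE KEY-----"),
    ],
    "medium": [
        re.compile(r"\b\d{13,16}\b"),                   # possible payment card number
        re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),     # email address
    ],
}

def classify_prompt(prompt: str) -> str:
    """Return the highest severity whose patterns match, or 'clear' if none do."""
    for severity in ("high", "medium"):
        if any(p.search(prompt) for p in SENSITIVE_PATTERNS[severity]):
            return severity
    return "clear"

if __name__ == "__main__":
    print(classify_prompt("Summarize our Q3 roadmap"))                  # clear
    print(classify_prompt("Customer SSN is 123-45-6789, please help"))  # high
```

The point of the severity tiers is prioritization: a hit on something resembling an SSN or a private key warrants a different response than an email address in a meeting note.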
The doubling of policy violations despite increased blocking reflects a fundamental tension. AI tools provide genuine productivity benefits. Employees will find ways to use them. Security's role shifts from prevention to acceptable-risk management—a harder problem without obvious solutions.