Announcements · February 5, 2026 · 5 min read

Talos Warns AI Adoption Is Outrunning Security

Cisco Talos sounds the alarm on AI tools that demand root access and store credentials in plaintext, calling the current adoption frenzy a security crisis.

ProbablyPwned Team

Cisco Talos researcher Joe Marshall didn't mince words in a blog post published today: organizations are handing AI tools the keys to their kingdoms and barely stopping to read the terms of service. The piece, titled "All gas, no brakes: Time to come to AI church," reads like a warning sermon for an industry drunk on automation.

Marshall's central argument is blunt — security is being sacrificed for convenience as AI adoption outpaces any reasonable ability to secure it. And the examples he cites aren't hypothetical. They're already causing damage.

OpenClaw's Plaintext Problem

The primary exhibit is OpenClaw (also called Clawdbot or Moltbot), an open-source agentic AI application that has racked up 157,000 GitHub stars. To function, OpenClaw asks users to surrender login credentials, passwords, and API keys. That data gets stored in plaintext files on the host machine.
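
To make the exposure concrete, here is a minimal sketch of the one-pass scan an auditor (or an attacker) could run against such a host. The ~/.openclaw directory and the JSON key names are assumptions for illustration, not OpenClaw's documented layout:

```python
# Minimal sketch: how trivially plaintext credential files can be swept up.
# The ~/.openclaw directory and key names are assumptions for illustration;
# the real application's layout may differ.
import json
import re
from pathlib import Path

# Key names that commonly indicate secrets stored in the clear.
SECRET_KEY = re.compile(r"(api[_-]?key|token|password|secret)", re.IGNORECASE)

def find_plaintext_secrets(config_dir: Path) -> list[str]:
    """Flag JSON config entries whose key names look like credentials."""
    findings = []
    for path in config_dir.glob("*.json"):
        try:
            data = json.loads(path.read_text())
        except (OSError, json.JSONDecodeError):
            continue
        if not isinstance(data, dict):
            continue
        for key, value in data.items():
            if SECRET_KEY.search(key) and isinstance(value, str):
                findings.append(f"{path.name}: '{key}' is stored in plaintext")
    return findings

if __name__ == "__main__":
    for finding in find_plaintext_secrets(Path.home() / ".openclaw"):
        print(finding)
```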

If that sounds familiar, it should. We've already covered how 341 malicious OpenClaw Skills were distributing Atomic Stealer through the ClawHub marketplace, and how defenders are scrambling to build monitoring tools to track what the agent is actually doing on their networks.

What makes OpenClaw particularly dangerous is its "Skills" feature — a plugin system that lets third-party code execute with root or administrator privileges. Marshall notes that 341 malicious extensions have already been identified, but the deeper issue is architectural. The tool was designed for convenience first, and security wasn't in the blueprint.
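
There is no official pre-install gate to point to here, so the following is purely a hedged sketch of the kind of check the architecture currently leaves to users. The skill.json manifest and its "privileges" field are hypothetical, not OpenClaw's actual plugin metadata:

```python
# Hedged sketch: reject a third-party Skill that requests elevated
# privileges before it ever runs. The skill.json manifest and its
# "privileges" field are hypothetical, not OpenClaw's real metadata.
import json
from pathlib import Path

DISALLOWED = {"root", "administrator", "sudo"}

def skill_is_acceptable(skill_dir: Path) -> bool:
    manifest = json.loads((skill_dir / "skill.json").read_text())
    requested = set(manifest.get("privileges", []))
    blocked = requested & DISALLOWED
    if blocked:
        print(f"Rejecting {manifest.get('name', skill_dir.name)}: "
              f"requests {', '.join(sorted(blocked))}")
        return False
    return True
```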

The "Install First, Ask Questions Later" Pattern

Marshall calls out a broader pattern that extends well beyond OpenClaw. He points to the launch of OpenAI's Atlas browser tool, which arrived with well-documented prompt injection vulnerabilities that the company itself acknowledged may never be fully solved. The pattern repeats across the industry: ship the product, accumulate users, worry about security later.

The numbers back him up. Microsoft's 2026 Data Security Index found that 32% of surveyed organizations' data security incidents now involve generative AI tools — yet only 47% of security leaders have implemented AI-specific controls. The gap between deployment speed and security readiness isn't closing. It's widening.

A Gravitee report on AI agent security put finer numbers on the problem: 88% of organizations reported confirmed or suspected AI agent security incidents in the past year, while only 14.4% have full security approval for their agent deployments. That means roughly six out of seven organizations deploying AI agents are doing so without their security teams signing off.

Credential Theft Is the Real Risk

What makes agentic AI tools different from traditional software is the breadth of access they require. An AI agent that can browse the web, write code, access databases, and send emails needs OAuth tokens, API keys, and session credentials to function. When those credentials sit in plaintext on a host machine — as they do with OpenClaw — a single compromise exposes everything the agent has ever touched.
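
The mitigation is not exotic. As a hedged sketch, the same credentials could live behind the operating system's secret store instead of a flat file; here via Python's third-party keyring package, which fronts the macOS Keychain, Windows Credential Manager, and the freedesktop Secret Service. The service and account names are placeholders:

```python
# Hedged sketch: keep agent credentials in the OS secret store rather
# than a plaintext file. Requires the third-party "keyring" package
# (pip install keyring); service and account names are placeholders.
import keyring

SERVICE = "example-ai-agent"  # placeholder service name

def store_api_key(account: str, api_key: str) -> None:
    # Delegates encryption-at-rest to the platform keychain.
    keyring.set_password(SERVICE, account, api_key)

def load_api_key(account: str) -> str | None:
    # Returns None if no credential has been stored for this account.
    return keyring.get_password(SERVICE, account)

store_api_key("openai", "sk-...")  # placeholder key for illustration
print(load_api_key("openai") is not None)
```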

This isn't a theoretical attack surface. Cisco's own research during Cisco Live Amsterdam this week framed AI security as a five-domain problem requiring governance across model integrity, data protection, access control, supply chain security, and operational monitoring. Marshall's blog post serves as the practical illustration of why that framework matters.

Browser extensions have proven to be an equally ripe attack vector in the AI space. Researchers found Chrome extensions stealing ChatGPT and DeepSeek conversations from 900,000 users — a reminder that AI tools create new exfiltration paths that traditional endpoint security wasn't designed to catch.

DKnife: A Parallel Threat

Marshall's post also highlights DKnife, a modular Linux attack framework targeting routers and edge devices that has operated since at least 2019. DKnife intercepts network traffic, steals credentials, bypasses endpoint security, and hijacks software updates. It's a different problem from AI tool insecurity, but the through-line is the same: organizations aren't securing their infrastructure before piling new technology on top of it.

The recommended mitigations for DKnife — hardening router and gateway security, auditing for unauthorized firmware, enforcing strong authentication, and implementing network segmentation — are standard advice that too many organizations still haven't followed.
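
Most of that list is policy, but the firmware audit step is straightforward to script. A minimal sketch, assuming you maintain an allowlist of vendor-published SHA-256 digests (the paths and the digest below are illustrative):

```python
# Minimal sketch: flag firmware images whose hashes don't match a
# vendor-published allowlist. Paths and digests are illustrative only.
import hashlib
from pathlib import Path

# Known-good SHA-256 digests, as published by the device vendor.
KNOWN_GOOD = {
    "edge-router.bin": "9f86d081884c7d659a2feaa0c55ad015a3bf4f1b2b0b822cd15d6c15b0f00a08",
}

def sha256_of(path: Path) -> str:
    digest = hashlib.sha256()
    with path.open("rb") as fh:
        for chunk in iter(lambda: fh.read(65536), b""):
            digest.update(chunk)
    return digest.hexdigest()

def audit_firmware(firmware_dir: Path) -> None:
    for image in firmware_dir.glob("*.bin"):
        expected = KNOWN_GOOD.get(image.name)
        if expected is None or sha256_of(image) != expected:
            print(f"ALERT: {image.name} does not match a known-good hash")

audit_firmware(Path("/srv/firmware"))  # illustrative path
```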

What Organizations Should Do

Marshall's recommendations boil down to treating AI tools with the same scrutiny applied to any other enterprise software deployment. That means:

  • Audit before adoption. Review what credentials an AI tool requires, how it stores them, and what privileges it needs. If a tool demands root access and stores secrets in plaintext, that's a dealbreaker.
  • Vet third-party extensions. The ClawHub marketplace proved that supply chain attacks against AI plugins are already happening at scale. Don't install unverified Skills or plugins.
  • Apply network segmentation. AI agents shouldn't have unrestricted access to internal resources. Segment them the way you'd segment any untrusted endpoint.
  • Monitor agent behavior. Log what AI agents access, when, and why; a sketch of one approach follows this list. If you don't have visibility into agent activity, you can't detect abuse.
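
Here is what that logging might look like in practice: a hedged sketch using the third-party watchdog package to record every filesystem event in an agent's working directory. The watched path is a placeholder, and a real deployment would forward these events to a SIEM rather than a local file:

```python
# Hedged sketch: log filesystem activity in an AI agent's working
# directory. Requires "pip install watchdog"; the watched path is a
# placeholder, and production setups would forward events to a SIEM.
import logging
import time

from watchdog.events import FileSystemEventHandler
from watchdog.observers import Observer

logging.basicConfig(
    filename="agent-activity.log",
    level=logging.INFO,
    format="%(asctime)s %(message)s",
)

class AgentActivityLogger(FileSystemEventHandler):
    def on_any_event(self, event):
        # Records creates, modifications, moves, and deletes
        # (pure reads do not generate filesystem events).
        logging.info("%s: %s", event.event_type, event.src_path)

observer = Observer()
observer.schedule(AgentActivityLogger(), "/opt/agent-workdir", recursive=True)
observer.start()
try:
    while True:
        time.sleep(1)
except KeyboardInterrupt:
    observer.stop()
observer.join()
```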

The Talos blog post reads less like a technical advisory and more like a reality check. The AI industry is moving fast, and security teams are being told to keep up or get out of the way. Marshall's message is that keeping up shouldn't mean giving up on basic security hygiene. A tool with 157,000 GitHub stars can still be a liability if it stores your API keys next to your grocery list.

For organizations navigating AI adoption decisions, our hacking news coverage tracks the threat surface as it develops.
