PROBABLYPWNED
Vulnerabilities · April 1, 2026 · 4 min read

Vertex AI Flaw Turns Enterprise AI Agents Into Data Thieves

Unit 42 exposes how excessive default permissions in Google Cloud's Vertex AI let attackers weaponize AI agents to steal data from customer environments.

Marcus Chen

Palo Alto Networks' Unit 42 uncovered a vulnerability in Google Cloud's Vertex AI that could turn enterprise AI agents into covert data exfiltration tools. The flaw stems from overly permissive default settings that grant AI agents far more access than they need—a classic case of convenience trumping security.

Researcher Ofir Shaty demonstrated how a misconfigured or compromised agent becomes what he called a "double agent"—appearing to serve its intended purpose while secretly accessing sensitive data across an organization's Google Cloud environment.

The Technical Problem

When organizations deploy AI agents using Vertex AI's Agent Development Kit (ADK), Google automatically provisions a Per-Project, Per-Product Service Agent (P4SA). This service account handles authentication between the agent and Google Cloud resources.

The problem: that default P4SA comes with excessive permissions. According to Unit 42's research, these credentials provide:

  • Unrestricted read access to all Google Cloud Storage buckets in the customer's project
  • Access to private Artifact Registry repositories containing Google's own container images
  • Visibility into internal Google infrastructure details

An attacker who compromises an AI agent—or tricks an organization into deploying a malicious one—inherits all these permissions. The agent can quietly exfiltrate data while performing its ostensible function.

How Exploitation Works

The attack leverages Google's metadata service, which makes credentials available to running workloads. When an attacker gains control of a Vertex AI agent, they can extract the service account credentials through standard cloud metadata queries.

Those credentials reveal the GCP project ID, agent identity, and authorized scopes. From there, the attacker can enumerate and access any storage bucket in the project—potentially containing customer data, intellectual property, or configuration files with additional secrets.
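
The two API calls involved in this path are both documented Google endpoints: the GCE metadata server's token endpoint and the Cloud Storage JSON API's bucket-listing endpoint. A minimal sketch of how the requests are constructed (the helper functions and project name are our own illustration, not Unit 42's actual proof of concept):

```python
# Illustration of the credential-theft path described above. The URLs and
# the Metadata-Flavor header are Google's documented conventions; the
# surrounding functions are a simplified sketch, not the researchers' code.

METADATA_BASE = "http://metadata.google.internal/computeMetadata/v1"

def token_request():
    """Build the request a workload uses to fetch its service-account
    access token from the metadata server."""
    url = f"{METADATA_BASE}/instance/service-accounts/default/token"
    headers = {"Metadata-Flavor": "Google"}  # required by the metadata server
    return url, headers

def list_buckets_request(project_id: str, access_token: str):
    """Build the Cloud Storage JSON API call that enumerates every bucket
    in the project -- readable by the default P4SA per the research."""
    url = f"https://storage.googleapis.com/storage/v1/b?project={project_id}"
    headers = {"Authorization": f"Bearer {access_token}"}
    return url, headers

if __name__ == "__main__":
    url, headers = token_request()
    print(url)
```

From the attacker's perspective, the token request runs inside the compromised agent; the bucket listing can then run from anywhere the stolen token is valid.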

The access to Google's internal Artifact Registry repositories raises separate concerns. Researchers could download private container images that form the foundation of Vertex AI's Reasoning Engine. This visibility into Google's supply chain could help attackers identify additional vulnerabilities.

AI Agents as Attack Surface

This research highlights an emerging security challenge: AI agents operate with agency. Unlike traditional applications that execute predetermined code paths, AI agents make decisions, access resources, and take actions based on prompts and context. That flexibility creates new attack vectors.

A compromised agent might appear to function normally while performing unauthorized actions. Traditional security monitoring designed for deterministic applications may miss subtle indicators of malicious behavior.

We've previously covered AI-related security concerns, and this research reinforces a recurring theme: AI infrastructure demands security attention commensurate with its business importance.

Google's Response

Google hasn't disputed the findings. Instead, the company updated its documentation to recommend defensive configurations:

  1. Bring Your Own Service Account (BYOSA) - Replace the default P4SA with a custom service account that follows least-privilege principles
  2. Enforce principle of least privilege (PoLP) - Grant agents only the specific permissions required for their intended function
  3. Treat AI deployments as production infrastructure - Apply the same security rigor to AI agents as to any production workload
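
In practice, the BYOSA and least-privilege recommendations reduce to diffing the permissions a service account holds against the permissions the agent's function actually requires. A toy audit sketch (the permission strings follow IAM's `service.resource.verb` naming, but the specific grant sets are hypothetical examples, not a real agent's configuration):

```python
# Illustrative least-privilege audit: flag permissions a service account
# holds beyond what the agent needs. The permission sets below are
# hypothetical examples for demonstration.

def excess_permissions(granted: set[str], required: set[str]) -> set[str]:
    """Return permissions that violate least privilege."""
    return granted - required

granted = {
    "storage.buckets.list",      # project-wide bucket enumeration
    "storage.objects.get",       # read any object
    "artifactregistry.repositories.downloadArtifacts",
    "aiplatform.endpoints.predict",
}
required = {
    "aiplatform.endpoints.predict",  # the agent only needs to serve predictions
    "storage.objects.get",           # ...and read its own config bucket
}

print(sorted(excess_permissions(granted, required)))
```

Anything the audit flags should either be justified in writing or removed from the custom service account before deployment.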

These recommendations effectively acknowledge that the default configuration isn't secure enough for production use. Organizations that deployed Vertex AI agents using defaults should review and harden their configurations.

Broader Implications for AI Security

The "double agent" framing captures something important about AI security risks. These systems aren't passive tools—they're autonomous actors with access to resources. Security models built for traditional software don't fully account for that agency.

Key considerations for organizations deploying AI agents:

  • Assume compromise - Design permissions assuming any agent could be malicious or compromised
  • Minimize access - AI agents don't need access to everything in your cloud environment
  • Monitor behavior - Watch for unusual data access patterns, even from trusted agents
  • Audit regularly - Review what permissions your AI infrastructure actually has versus what it needs
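
The "monitor behavior" point can be made concrete with even a simple baseline check: flag any access to a resource the agent has never legitimately touched before. A toy sketch (real deployments would build the baseline from Cloud Audit Logs; the bucket names here are invented):

```python
# Toy behavioral monitor: alert when an agent reads a bucket outside its
# historical baseline. The bucket names are hypothetical examples.

def build_baseline(history: list[str]) -> set[str]:
    """Buckets the agent has legitimately accessed before."""
    return set(history)

def flag_anomalies(baseline: set[str], recent: list[str]) -> list[str]:
    """Return accesses to buckets never seen in the baseline."""
    return [b for b in recent if b not in baseline]

baseline = build_baseline(["agent-config", "agent-config", "prompt-cache"])
alerts = flag_anomalies(baseline, ["agent-config", "hr-payroll-exports"])
print(alerts)  # the payroll bucket access is the anomaly
```

A production version would add time windows, per-principal baselines, and alert routing, but the core idea is the same: a "double agent" betrays itself through resource access it cannot justify.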

The rush to deploy AI capabilities often outpaces security consideration. This vulnerability is a reminder that AI infrastructure carries the same risks as any other software—plus new risks unique to autonomous systems.

What Defenders Should Do

If you're running Vertex AI agents on Google Cloud:

  1. Inventory existing agents and their associated service accounts
  2. Review current permissions granted to P4SA accounts
  3. Migrate to BYOSA with explicitly scoped permissions
  4. Enable Cloud Audit Logs to track agent activity
  5. Implement network controls limiting agent access to necessary resources
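
Step 4 pays off when you actually query the logs. A minimal sketch of scanning audit-log entries for storage reads by an agent's service account (the entry shape mirrors Cloud Audit Logs' JSON layout with `protoPayload.methodName`; the service-account email and sample entries are fabricated for illustration):

```python
# Sketch of auditing Cloud Audit Log entries for storage reads by an
# agent's service account. Entry structure follows the audit-log JSON
# convention; the principal and sample entries are hypothetical.

AGENT_SA = "my-agent@example-project.iam.gserviceaccount.com"  # hypothetical

def storage_reads_by(entries: list[dict], principal: str) -> list[str]:
    """Return resource names the given principal read from Cloud Storage."""
    reads = []
    for e in entries:
        payload = e.get("protoPayload", {})
        if (payload.get("methodName") == "storage.objects.get"
                and payload.get("authenticationInfo", {})
                           .get("principalEmail") == principal):
            reads.append(payload.get("resourceName", ""))
    return reads

sample = [
    {"protoPayload": {"methodName": "storage.objects.get",
                      "authenticationInfo": {"principalEmail": AGENT_SA},
                      "resourceName": "projects/_/buckets/agent-config/objects/prompt.txt"}},
    {"protoPayload": {"methodName": "storage.objects.get",
                      "authenticationInfo": {"principalEmail": "dev@example.com"},
                      "resourceName": "projects/_/buckets/scratch/objects/notes.md"}},
]
print(storage_reads_by(sample, AGENT_SA))
```

Note that Data Access audit logs for Cloud Storage are not enabled by default, so this review only works if step 4 has actually been completed.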

For organizations evaluating AI platforms, this research provides a template for security assessment. Ask vendors about default permission models, service account configurations, and monitoring capabilities. The answer "we use reasonable defaults" isn't good enough.

The AI gold rush continues, but security teams need to ensure it doesn't create debt that takes years to pay down. Least privilege isn't a new concept; it just applies to AI agents too.
