PROBABLYPWNED
Vulnerabilities · February 26, 2026 · 4 min read

OpenLIT GitHub Actions Flaw Exposes API Keys, Cloud Credentials

CVE-2026-27941 (CVSS 9.9) lets attackers execute code via pull requests to OpenLIT, stealing GITHUB_TOKEN and cloud secrets. Patch to 1.37.1 now.

Marcus Chen

A critical vulnerability in OpenLIT, an open-source observability platform for AI engineering, allows attackers to execute arbitrary code and steal sensitive secrets by submitting a malicious pull request. CVE-2026-27941 carries a CVSS score of 9.9 and was publicly disclosed today.

The flaw stems from improper use of GitHub Actions' pull_request_target event in OpenLIT's CI/CD workflows. This configuration mistake gives external contributors—including attackers—access to the main repository's security context, not their own fork's limited permissions.

What's Exposed

An attacker exploiting CVE-2026-27941 gains access to:

  • Write-privileged GITHUB_TOKEN (can push code, modify releases)
  • API keys for various services
  • Database and vector store connection tokens
  • Google Cloud service account credentials

These secrets enable follow-on attacks ranging from supply chain compromise to lateral movement into connected cloud infrastructure. For organizations using OpenLIT to monitor their AI/ML pipelines, the exposed credentials could provide direct access to production systems.

How the Attack Works

The vulnerable workflows use pull_request_target incorrectly, checking out and executing untrusted code from forked repositories. Here's the attack flow:

  1. Attacker forks the OpenLIT repository
  2. Modifies workflow files or injects malicious code
  3. Opens a pull request to the main project
  4. The workflow triggers with elevated privileges, executing the attacker's payload
  5. Secrets are exfiltrated to attacker-controlled infrastructure
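The flow above corresponds to a workflow pattern like the following. This is a hypothetical illustration of the vulnerable configuration, not OpenLIT's actual workflow file; the job and secret names are made up for the example.

```yaml
# HYPOTHETICAL sketch of the vulnerable pattern (not OpenLIT's real workflow).
# pull_request_target runs in the BASE repository's security context, with
# access to its secrets -- yet this job checks out and executes code from the
# untrusted PR head.
name: ci
on: pull_request_target   # dangerous: grants base-repo privileges to fork PRs

jobs:
  test:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
        with:
          # checks out the attacker-controlled branch
          ref: ${{ github.event.pull_request.head.sha }}
      # executes untrusted code while secrets are in scope
      - run: pip install -e . && pytest
        env:
          SOME_API_KEY: ${{ secrets.SOME_API_KEY }}   # exposed to attacker code
```

An attacker who controls the head branch controls everything that runs in the `pytest` step, including the ability to read the environment and exfiltrate it.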

No special permissions are required—anyone who can open a pull request can trigger the vulnerability. The attack requires minimal sophistication, making it accessible to opportunistic threat actors.

This type of CI/CD misconfiguration has become increasingly common as organizations adopt GitHub Actions without fully understanding its security model. We've seen similar supply chain concerns with malicious repository content targeting developers through trusted platforms.

Why pull_request_target Is Dangerous

GitHub's pull_request_target event was designed for specific use cases like labeling PRs or adding comments—operations that need write access but shouldn't execute untrusted code. When workflows check out PR code and run it, they effectively give external contributors the same privileges as repository maintainers.

The OpenLIT workflows violated this principle by executing code from pull request branches while running in the base repository's context. This pattern appears in countless repositories across GitHub, often copied from templates without understanding the security implications.
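For comparison, a safer equivalent uses the plain pull_request event, which runs fork PRs in the fork's own context with a read-only GITHUB_TOKEN and no access to repository secrets. Again, this is a generic sketch rather than OpenLIT's patched configuration:

```yaml
# Safer pattern (sketch): plain pull_request runs fork PRs without secrets
# and with a read-only token by default.
name: ci
on: pull_request

permissions:
  contents: read   # explicitly pin the token to read-only

jobs:
  test:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4   # merge commit of the PR; no secrets exposed
      - run: pip install -e . && pytest
```

If a workflow genuinely needs write access (e.g., to label PRs), the usual guidance is to keep that pull_request_target job free of any checkout or execution of PR code.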

Affected Versions and Remediation

All OpenLIT versions prior to 1.37.1 are vulnerable. Organizations should:

  1. Upgrade immediately to version 1.37.1 or later
  2. Audit CI/CD logs for suspicious workflow executions
  3. Rotate exposed secrets if you suspect compromise
  4. Review GitHub Actions for similar pull_request_target misconfigurations
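Step 4 can be partially automated. The sketch below is a hypothetical helper, not an official tool: it heuristically flags workflow files that combine the pull_request_target trigger with a checkout of the PR head ref, the combination at the heart of CVE-2026-27941.

```python
import re
from pathlib import Path

# References to the untrusted PR head, as used in checkout steps.
HEAD_REF = re.compile(r"github\.event\.pull_request\.head\.(sha|ref)")

def audit_workflows(workflows_dir: str) -> list[str]:
    """Return workflow filenames that look vulnerable.

    Heuristic only: flags files that use pull_request_target AND reference
    the PR head ref. Manual review of each hit is still required.
    """
    flagged = []
    for path in Path(workflows_dir).glob("*.y*ml"):   # .yml and .yaml
        text = path.read_text(encoding="utf-8", errors="ignore")
        if "pull_request_target" in text and HEAD_REF.search(text):
            flagged.append(path.name)
    return sorted(flagged)
```

Pointing `audit_workflows` at a repository's `.github/workflows` directory gives a quick shortlist of files worth a closer look; it will not catch indirect variants (e.g., refs built from expressions), so treat a clean result as necessary but not sufficient.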

No proof-of-concept exploit has been published yet, but the attack is straightforward enough that weaponization should be assumed imminent.

Broader Implications for AI/ML Security

OpenLIT is used to monitor LLM applications, trace AI agent behavior, and collect telemetry from production AI systems. The exposed Google Cloud service account key is particularly concerning—it likely has access to compute resources, storage buckets, and potentially the AI models themselves.

This vulnerability highlights a growing attack surface in the AI/ML ecosystem. As organizations race to deploy generative AI applications, they're adopting new tooling—often open-source—without adequate security review. The intersection of AI engineering platforms and CI/CD infrastructure creates opportunities for attackers to compromise both development pipelines and production systems simultaneously.

For teams building AI applications, this serves as a reminder that observability and monitoring tools have privileged access by design. They need the same security scrutiny as the applications they monitor. Reviewing the security posture of AI agent configuration and tooling has become essential as these platforms expand.

Detection and Response

Organizations running OpenLIT should check GitHub Actions logs for:

  • Unexpected workflow runs triggered by external PRs
  • Workflows that check out code from untrusted branches
  • Network connections to unfamiliar destinations during CI runs
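The first of these checks can be scripted against the run metadata returned by GitHub's REST API (GET /repos/{owner}/{repo}/actions/runs). The filter below is a sketch, assuming the standard workflow-run object fields; it flags pull_request_target runs whose head repository differs from the base, i.e., runs that executed fork code with base-repo privileges.

```python
def suspicious_runs(runs: list[dict]) -> list[dict]:
    """Flag workflow runs triggered by pull_request_target from a fork.

    `runs` is the "workflow_runs" list from GitHub's
    GET /repos/{owner}/{repo}/actions/runs endpoint.
    """
    flagged = []
    for run in runs:
        head = (run.get("head_repository") or {}).get("full_name")
        base = (run.get("repository") or {}).get("full_name")
        # A pull_request_target run originating from a different (forked)
        # repo ran with base-repo privileges -- the CVE-2026-27941 window.
        if run.get("event") == "pull_request_target" and head and head != base:
            flagged.append(run)
    return flagged
```

Every hit deserves a look at the run's logs for unexpected network egress or modified workflow files.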

If you identify suspicious activity, treat it as a confirmed breach. The exposed credentials provide persistent access that survives workflow termination.
