OpenLIT GitHub Actions Flaw Exposes API Keys, Cloud Credentials
CVE-2026-27941 (CVSS 9.9) lets attackers execute code via pull requests to OpenLIT, stealing GITHUB_TOKEN and cloud secrets. Patch to 1.37.1 now.
A critical vulnerability in OpenLIT, an open-source observability platform for AI engineering, allows attackers to execute arbitrary code and steal sensitive secrets by submitting a malicious pull request. CVE-2026-27941 carries a CVSS score of 9.9 and was publicly disclosed today.
The flaw stems from improper use of GitHub Actions' pull_request_target event in OpenLIT's CI/CD workflows. This configuration mistake gives external contributors—including attackers—access to the main repository's security context, not their own fork's limited permissions.
What's Exposed
An attacker exploiting CVE-2026-27941 gains access to:
- Write-privileged GITHUB_TOKEN (can push code, modify releases)
- API keys for various services
- Database and vector store connection tokens
- Google Cloud service account credentials
These secrets enable follow-on attacks ranging from supply chain compromise to lateral movement into connected cloud infrastructure. For organizations using OpenLIT to monitor their AI/ML pipelines, the exposed credentials could provide direct access to production systems.
How the Attack Works
The vulnerable workflows use pull_request_target incorrectly, checking out and executing untrusted code from forked repositories. Here's the attack flow:
- Attacker forks the OpenLIT repository
- Modifies workflow files or injects malicious code
- Opens a pull request to the main project
- The workflow triggers with elevated privileges, executing the attacker's payload
- Secrets are exfiltrated to attacker-controlled infrastructure
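In workflow terms, the dangerous combination looks something like the following sketch. This is an illustrative reconstruction of the general pattern, not OpenLIT's actual workflow file; the job, script, and secret names are invented:

```yaml
# Illustrative sketch of the vulnerable pattern (names are hypothetical)
name: CI
on:
  pull_request_target:   # runs in the BASE repository's context, with secrets
jobs:
  test:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
        with:
          # Checks out the attacker's fork code...
          ref: ${{ github.event.pull_request.head.sha }}
      # ...then executes it with the base repo's secrets in the environment
      - name: Run tests
        run: ./scripts/run-tests.sh
        env:
          GCP_SA_KEY: ${{ secrets.GCP_SA_KEY }}
```

Because the trigger is pull_request_target, the GITHUB_TOKEN in this run carries write permissions and repository secrets are available, which is exactly what the checked-out, attacker-controlled script can read and exfiltrate.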
No special permissions are required—anyone who can open a pull request can trigger the vulnerability. The attack requires minimal sophistication, making it accessible to opportunistic threat actors.
This type of CI/CD misconfiguration has become increasingly common as organizations adopt GitHub Actions without fully understanding its security model. We've seen similar supply chain concerns with malicious repository content targeting developers through trusted platforms.
Why pull_request_target Is Dangerous
GitHub's pull_request_target event was designed for specific use cases like labeling PRs or adding comments—operations that need write access but shouldn't execute untrusted code. When workflows checkout PR code and run it, they effectively give external contributors the same privileges as repository maintainers.
The OpenLIT workflows violated this principle by executing code from pull request branches while running in the base repository's context. This pattern appears in countless repositories across GitHub, often copied from templates without understanding the security implications.
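For the use cases pull_request_target was actually designed for, the safe shape is to perform the privileged action without ever checking out or executing PR code. A minimal sketch using the actions/labeler action (which assumes a .github/labeler.yml config in the repository):

```yaml
# Legitimate pull_request_target use: write access, but no untrusted code runs
name: Label PRs
on:
  pull_request_target:
permissions:
  pull-requests: write
jobs:
  label:
    runs-on: ubuntu-latest
    steps:
      # No checkout of the PR branch; the action only calls the GitHub API
      - uses: actions/labeler@v5
        with:
          repo-token: ${{ secrets.GITHUB_TOKEN }}
```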
Affected Versions and Remediation
All OpenLIT versions prior to 1.37.1 are vulnerable. Organizations should:
- Upgrade immediately to version 1.37.1 or later
- Audit CI/CD logs for suspicious workflow executions
- Rotate exposed secrets if you suspect compromise
- Review GitHub Actions workflows for similar pull_request_target misconfigurations
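When auditing workflows for this misconfiguration, the usual fix for CI that must run contributor code is to switch the trigger to plain pull_request, which runs in the fork's context with a read-only token and no repository secrets. A hedged sketch of the corrected pattern (the test script name is a placeholder):

```yaml
# Safer pattern for running untrusted PR code: no secrets, read-only token
name: CI
on:
  pull_request:          # fork PRs run without access to base-repo secrets
permissions:
  contents: read
jobs:
  test:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4     # checks out the PR merge commit
      - name: Run tests
        run: ./scripts/run-tests.sh   # hypothetical test entry point
```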
No proof-of-concept exploit has been published yet, but the attack is straightforward enough that weaponization should be assumed imminent.
Broader Implications for AI/ML Security
OpenLIT is used to monitor LLM applications, trace AI agent behavior, and collect telemetry from production AI systems. The exposed Google Cloud service account key is particularly concerning—it likely has access to compute resources, storage buckets, and potentially the AI models themselves.
This vulnerability highlights a growing attack surface in the AI/ML ecosystem. As organizations race to deploy generative AI applications, they're adopting new tooling—often open-source—without adequate security review. The intersection of AI engineering platforms and CI/CD infrastructure creates opportunities for attackers to compromise both development pipelines and production systems simultaneously.
For teams building AI applications, this serves as a reminder that observability and monitoring tools have privileged access by design. They need the same security scrutiny as the applications they monitor. Reviewing the security posture of AI agent configuration and tooling has become essential as these platforms expand.
Detection and Response
Organizations running OpenLIT should check GitHub Actions logs for:
- Unexpected workflow runs triggered by external PRs
- Workflows that checkout code from untrusted branches
- Network connections to unfamiliar destinations during CI runs
If you identify suspicious activity, treat it as a confirmed breach. The exposed credentials provide persistent access that survives workflow termination.