Rust Stealer Hidden in Fake OpenAI Model Tops Hugging Face
A typosquatted OpenAI repository on Hugging Face delivered Rust-based infostealer malware to Windows users, racking up 244K downloads before removal.
A malicious repository impersonating OpenAI's "Privacy Filter" project briefly claimed the top spot on Hugging Face's trending list, accumulating 244,000 downloads before the platform removed it. Security researchers at HiddenLayer discovered the campaign on May 7, revealing a sophisticated attack that weaponized user trust in a well-known AI brand.
TL;DR
- What happened: Threat actors typosquatted OpenAI's Privacy Filter model on Hugging Face to distribute a Rust-based infostealer
- Who's affected: Windows users who downloaded and executed files from the malicious Open-OSS/privacy-filter repository
- Severity: High — complete credential theft, cryptocurrency wallet compromise, and persistent system access
- Action required: Affected users should reimage systems, rotate all credentials, and invalidate browser sessions immediately
How the Attack Worked
The attackers created a repository named Open-OSS/privacy-filter that closely mimicked OpenAI's legitimate privacy-filter release. They copied the model card nearly verbatim and included a loader.py script that appeared to contain legitimate AI-related code.
Behind the facade, the Python script performed several malicious actions: it disabled SSL certificate verification, decoded a base64-encoded URL pointing to an external resource, fetched a JSON payload from that address, and executed the PowerShell commands embedded in it. The PowerShell stage downloaded a batch file that escalated privileges, added the final payload to Microsoft Defender's exclusion list, and launched the infostealer.
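The staging pattern described above can be sketched, defanged, in a few lines of Python. Everything here is a placeholder reconstruction for illustration — the URL, payload, and command are invented, no network request is made, and nothing is executed:

```python
import base64
import json

# Defanged sketch of the loader's staging pattern (placeholder values only).
# Shipping only an encoded URL means a casual review of loader.py sees no
# obvious network indicator.
ENCODED_URL = base64.b64encode(b"https://example.invalid/stage2.json").decode()

# At runtime the loader decodes the URL...
stage2_url = base64.b64decode(ENCODED_URL).decode()

# ...fetches a JSON document from it (skipped here; the real loader also
# disabled SSL certificate verification for this request)...
stage2 = json.loads('{"cmd": "powershell -EncodedCommand <placeholder>"}')

# ...and would hand stage2["cmd"] to a subprocess. This sketch stops short.
```

The point of the indirection is reviewability: each stage looks inert on its own, and the malicious behavior only emerges when the chain runs end to end.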
This delivery mechanism mirrors tactics we covered in the DAEMON Tools supply chain attack earlier this week, where legitimate software distribution channels were similarly weaponized.
The Rust-Based Payload
The final payload is a Rust-based infostealer with extensive data harvesting capabilities, similar to the Lumma Stealer variants that have plagued enterprises this year:
- Browser data: Cookies, saved passwords, encryption keys, and session tokens from Chromium and Gecko-based browsers
- Discord: Tokens, local databases, and master encryption keys
- Cryptocurrency: Wallet files and browser extension data
- Credentials: SSH keys, FTP configurations, VPN files, and FileZilla settings
- Surveillance: Multi-monitor screenshots for reconnaissance
Stolen data gets compressed and exfiltrated to a command-and-control server at recargapopular[.]com.
The malware includes extensive anti-analysis features — checks for virtual machines, sandboxes, debuggers, and common analysis tools. These evasion techniques echo the NWHStealer campaign we reported on today, which employed similar VM detection to evade security researchers.
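Defenders can sweep proxy or DNS log exports for the C2 indicator named above. A minimal sketch — the log line format is an assumption, so substitute your own parser:

```python
# Known C2 indicator from HiddenLayer's report (defanged in prose as
# recargapopular[.]com); extend this set as further indicators are published.
C2_DOMAINS = {"recargapopular.com"}

def find_c2_hits(log_lines):
    """Return log lines that reference a known C2 domain."""
    return [line for line in log_lines
            if any(domain in line for domain in C2_DOMAINS)]

# Usage: feed in lines from a proxy or DNS log export (hypothetical sample)
hits = find_c2_hits([
    "2026-05-08 10:02:11 GET recargapopular.com/upload",
    "2026-05-08 10:02:12 GET example.com/index.html",
])
```

Any hit warrants treating the source host as compromised per the remediation steps below, since the malware only contacts this domain to exfiltrate already-stolen data.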
Inflated Metrics and Fake Engagement
HiddenLayer's analysis suggests the campaign used artificial engagement to game Hugging Face's trending algorithm. The vast majority of the 667 accounts that liked the malicious repository appear to be auto-generated, and researchers believe the 244,000 download count was artificially inflated.
This inflation served a strategic purpose: reaching the #1 trending position legitimized the repository in users' eyes. When searching for privacy-focused AI tools, victims encountered what appeared to be a wildly popular, community-endorsed project.
Why Privacy-Focused Users Were Perfect Targets
The attack exploited a cognitive blind spot. Users actively seeking privacy protection tools are more likely to accept unusual installation instructions — running a loader script for a privacy tool feels like a security feature rather than a red flag. The friction becomes expected rather than suspicious.
This represents a broader pattern in AI/ML supply chain attacks, where attackers target the very infrastructure developers trust to accelerate their work.
Connection to Broader Campaigns
Researchers identified overlaps between this campaign and npm typosquatting operations distributing the WinOS 4.0 implant. The shared infrastructure and techniques suggest either a common threat actor or, at minimum, shared resources within the underground economy.
The Lazarus group's npm/PyPI supply chain attacks demonstrate how nation-state actors have recognized developer platforms as high-value targets. While attribution for the Hugging Face campaign remains unclear, the operational sophistication suggests more than opportunistic criminals.
What Affected Users Should Do
Anyone who downloaded files from the Open-OSS/privacy-filter repository should assume full compromise:
- Reimage the affected machine — malware persistence mechanisms may survive simple cleanup
- Rotate all stored credentials — browser-saved passwords, SSH keys, API tokens
- Replace cryptocurrency wallets — generate new wallets and seed phrases on a clean system
- Invalidate browser sessions — log out of all services and revoke active tokens
- Enable MFA on accounts that support it if not already active
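To check whether a machine ever pulled the repository through the huggingface_hub client, look for its directory in the local model cache. The path below is the library's default layout; HF_HOME or HF_HUB_CACHE environment variables can relocate it:

```python
from pathlib import Path

# Default huggingface_hub cache location (HF_HOME / HF_HUB_CACHE may override).
HF_CACHE = Path.home() / ".cache" / "huggingface" / "hub"

# Cached repos are stored as models--{org}--{name}
SUSPECT_DIR = "models--Open-OSS--privacy-filter"

def was_downloaded(cache_dir: Path = HF_CACHE) -> bool:
    """True if the malicious repo is present in the local model cache."""
    return (cache_dir / SUSPECT_DIR).exists()

if was_downloaded():
    print("Malicious repo found in cache: treat this host as compromised.")
```

Note that a clean cache is not an all-clear — files downloaded manually through the browser never touch this directory.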
The Broader Supply Chain Problem
Hugging Face has removed the malicious repository, but the incident highlights systemic vulnerabilities in AI model distribution. Unlike code repositories with established security scanning, ML model platforms face unique challenges — malicious payloads can hide in model weights, configuration files, or accompanying scripts.
For organizations incorporating open-source AI models, this attack reinforces the need to verify model sources, audit accompanying code, and implement runtime monitoring for unexpected network connections. Security teams should review our guide on supply chain attacks targeting developer tools for defensive strategies. The OWASP Top 10 for LLM Applications provides guidance for securing AI pipelines, though model supply chain risks continue to outpace defensive measures.
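As a starting point for auditing accompanying code, a heuristic grep over a repository's Python files catches the specific tells this loader used. The pattern list is illustrative, not exhaustive, and static matching is no substitute for sandboxed execution:

```python
import re
from pathlib import Path

# Heuristic indicators drawn from the loader.py behavior described above.
SUSPICIOUS_PATTERNS = [
    r"base64\.b64decode",       # hidden URLs or payloads
    r"verify\s*=\s*False",      # disabled TLS verification
    r"check_hostname\s*=\s*False",
    r"powershell",              # shelling out to PowerShell
    r"subprocess|os\.system",   # arbitrary command execution
]

def audit_repo(repo_dir: Path) -> dict[str, list[str]]:
    """Map each .py file under repo_dir to the suspicious patterns it matches."""
    findings = {}
    for path in repo_dir.rglob("*.py"):
        text = path.read_text(errors="ignore")
        hits = [p for p in SUSPICIOUS_PATTERNS
                if re.search(p, text, re.IGNORECASE)]
        if hits:
            findings[str(path)] = hits
    return findings
```

Run this against any downloaded model repo before executing its scripts; a nonzero result is a prompt for manual review, not proof of malice, since legitimate code occasionally matches these patterns too.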
The AI industry's rush to democratize model access created the infrastructure now being weaponized against it.
Related Articles
Infostealer Campaign Abuses Bun Runtime to Evade Detection
NWHStealer spreads via fake gaming mods and TradingView scripts, using Bun JavaScript runtime and XOR-encrypted C2 to bypass security tools.
May 9, 2026

MicroStealer Targets Telecom and Education With Low Detection
New infostealer MicroStealer evades major antivirus while stealing browser credentials, crypto wallets, and Discord tokens from US and German organizations.
May 5, 2026

PyPI Package With 1.1M Downloads Hijacked to Push Infostealer
Attackers compromised elementary-data version 0.23.3 on PyPI, pushing malicious code to 1.1 million monthly users. The infection extended to Docker images via automated workflows.
May 4, 2026

DEEP#DOOR Backdoor Harvests Passwords, Cloud Tokens, SSH Keys
Securonix uncovers DEEP#DOOR, a Python-based backdoor that steals browser passwords, AWS/Azure credentials, and SSH keys while evading detection through bore.pub tunneling and extensive anti-analysis.
May 4, 2026