Pushpaganda: AI-Generated Fake News Hijacks Google Discover
HUMAN Security exposes Pushpaganda campaign using AI content to poison Google Discover feeds, generating 240 million fraudulent ad requests through scareware and fake news.
Security researchers at HUMAN's Satori team uncovered a sophisticated ad fraud operation that weaponizes AI-generated content to infiltrate Google Discover feeds. The campaign, dubbed Pushpaganda, peaked at 240 million fraudulent bid requests over a single seven-day period, exploiting how Google's personalized content system surfaces news to Android and Chrome users.
How Pushpaganda Works
The attack chain combines search engine poisoning with social engineering:
- AI content generation - Attackers create fake news articles using large language models, designed to manipulate search ranking signals
- Google Discover infiltration - Poisoned content surfaces in users' personalized Discover feeds on Android and Chrome
- Notification coercion - When victims click through to attacker-controlled domains, they're pressured into enabling browser push notifications
- Scareware delivery - Enabled notifications deliver fake legal threats, virus warnings, and other alarming messages
- Traffic monetization - Clicking notifications redirects users through ad networks, generating fraudulent revenue
At its height, 113 domains were linked to the campaign, together generating nearly a quarter-billion ad requests in a single week.
Geographic Spread
Pushpaganda initially targeted India but expanded rapidly to the United States, Australia, Canada, South Africa, and the United Kingdom. The global spread suggests either multiple threat actors adopted the technique or the original operators scaled their infrastructure.
The campaign exploits a fundamental tension in Google Discover: the system prioritizes engaging, novel content—exactly what AI-generated clickbait is optimized to produce.
The AI Content Problem
Google Discover has become a significant traffic driver for news publishers, but its algorithmic curation creates opportunities for manipulation. Unlike search, where users express explicit intent, Discover pushes content proactively. Users trust what appears in their feed.
AI-generated fake news exploits this trust. The articles don't need to fool journalists or fact-checkers—they just need to trigger engagement signals that boost Discover ranking. Sensational headlines, emotional hooks, and controversial framing all game the algorithm.
This isn't the first time AI-powered attacks have leveraged automation at scale. What's different is the target: rather than directly compromising systems, Pushpaganda hijacks Google's content recommendation infrastructure to deliver scareware.
Scareware Tactics
The push notifications delivered by Pushpaganda use psychological pressure to drive clicks:
- Fake legal threats - "Final warning: Your device has been flagged for illegal content"
- False virus alerts - "URGENT: 47 viruses detected on your device"
- Authority impersonation - Messages styled to look like law enforcement or government notices
- Countdown timers - Artificial urgency pushing victims to act immediately
Each click generates ad revenue through embedded advertisements on the destination scam sites. Victims may also be pushed toward fake tech support, subscription traps, or malware downloads.
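The pressure tactics above follow recognizable textual patterns, which means defenders can flag many of them with simple heuristics. The sketch below is illustrative only: the pattern list and the `looks_like_scareware` helper are assumptions modeled on the example messages, not HUMAN's actual detection logic.

```python
import re

# Hypothetical patterns modeled on the scareware examples above;
# real detection would combine many more signals than keyword matching.
SCAREWARE_PATTERNS = [
    r"\bfinal warning\b",
    r"\burgent\b",
    r"\b\d+\s+viruses?\s+detected\b",
    r"\bflagged for illegal\b",
    r"\bact (?:now|immediately)\b",
]

def looks_like_scareware(text: str) -> bool:
    """Return True if the notification text matches a known pressure pattern."""
    lowered = text.lower()
    return any(re.search(p, lowered) for p in SCAREWARE_PATTERNS)

print(looks_like_scareware("URGENT: 47 viruses detected on your device"))  # True
print(looks_like_scareware("Your weekly newsletter has arrived"))          # False
```

A filter like this is trivially evaded by rewording, which is exactly why AI-generated variants are hard to block at the text level; it is useful mainly for triaging reported notifications.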
Google's Response
Google confirmed it "launched a fix for the spam issue in question" before receiving the research report, emphasizing ongoing work on "spam-fighting systems and policies against emerging forms of low quality, manipulative content."
The company didn't specify what the fix entails—likely a combination of ranking adjustments, domain blocking, and detection improvements for AI-generated content.
Detection and Protection
For organizations concerned about ad fraud or employees falling for push notification scams:
- Block notification prompts - Enterprise browsers can disable or restrict push notification requests
- Monitor DNS traffic - The 113 identified domains provide IOCs for network-level blocking
- User awareness training - Teach employees that legitimate services never demand immediate action through browser notifications
- Review Discover settings - Users can disable Google Discover or limit its sources
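Network teams could operationalize the domain IOCs with a matcher over DNS query logs. A minimal sketch follows; the domain names are placeholders for illustration, not actual Pushpaganda IOCs, and the matching walks up the label hierarchy so subdomains of a blocked domain are also caught.

```python
def build_blocklist(iocs: list[str]) -> set[str]:
    """Normalize IOC domains to lowercase without trailing dots."""
    return {d.lower().rstrip(".") for d in iocs}

def is_blocked(query: str, blocklist: set[str]) -> bool:
    """Match a DNS query name against the blocklist, including subdomains."""
    name = query.lower().rstrip(".")
    # Walk up the labels: cdn.fake-news-example.com -> fake-news-example.com -> com
    while name:
        if name in blocklist:
            return True
        _, _, name = name.partition(".")
    return False

# Placeholder IOC domains for illustration only
blocklist = build_blocklist(["fake-news-example.com", "scare-alerts.net"])
print(is_blocked("cdn.fake-news-example.com", blocklist))  # True
print(is_blocked("news.example.org", blocklist))           # False
```

In practice the same list would usually be fed to a DNS firewall or RPZ zone rather than a script, but the matching semantics are the same.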
Why This Matters
Pushpaganda represents the intersection of AI content generation and advertising fraud—two trends that security teams increasingly need to address. The campaign demonstrates that AI doesn't just enable more sophisticated attacks; it enables more attacks, period.
For publishers, the damage extends beyond fraud. When fake content floods Discover, it erodes trust in legitimate news surfaced through the same system, and the attention stolen from real outlets is attention someone else is monetizing.
HUMAN's Satori team continues monitoring for Pushpaganda variants. Given the campaign's success before detection, expect copycats deploying similar tactics across other content recommendation systems.