PROBABLYPWNED
Threat Intelligence · April 15, 2026 · 4 min read

Pushpaganda: AI-Generated Fake News Hijacks Google Discover

HUMAN Security exposes Pushpaganda campaign using AI content to poison Google Discover feeds, generating 240 million fraudulent ad requests through scareware and fake news.

Alex Kowalski

Security researchers at HUMAN's Satori team uncovered a sophisticated ad fraud operation that weaponizes AI-generated content to infiltrate Google Discover feeds. The campaign, dubbed Pushpaganda, peaked at 240 million fraudulent bid requests over a single seven-day period, exploiting how Google's personalized content system surfaces news to Android and Chrome users.

How Pushpaganda Works

The attack chain combines search engine poisoning with social engineering:

  1. AI content generation - Attackers create fake news articles using large language models, designed to manipulate search ranking signals
  2. Google Discover infiltration - Poisoned content surfaces in users' personalized Discover feeds on Android and Chrome
  3. Notification coercion - When victims click through to attacker-controlled domains, they're pressured into enabling browser push notifications
  4. Scareware delivery - Enabled notifications deliver fake legal threats, virus warnings, and other alarming messages
  5. Traffic monetization - Clicking notifications redirects users through ad networks, generating fraudulent revenue

At its height, 113 domains were linked to the campaign, collectively generating nearly a quarter-billion ad requests in a single week.
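Those headline figures imply a heavy per-domain load. A back-of-the-envelope check using the reported numbers (the even split across domains is an assumption for illustration; real traffic was almost certainly skewed):

```python
# Rough per-domain scale of the campaign, from the reported figures.
# Assumes traffic is spread evenly across domains -- illustrative only.
WEEKLY_BID_REQUESTS = 240_000_000  # peak over a seven-day period
DOMAINS = 113

per_domain_weekly = WEEKLY_BID_REQUESTS / DOMAINS
per_domain_daily = per_domain_weekly / 7

print(f"~{per_domain_weekly:,.0f} requests per domain per week")  # ~2.1 million
print(f"~{per_domain_daily:,.0f} requests per domain per day")    # ~303 thousand
```

Even averaged out, each domain in the network was pushing millions of bid requests a week, which is why the campaign registered so clearly in ad-traffic telemetry.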

Geographic Spread

Pushpaganda initially targeted India but expanded rapidly to the United States, Australia, Canada, South Africa, and the United Kingdom. The global spread suggests either multiple threat actors adopted the technique or the original operators scaled their infrastructure.

The campaign exploits a fundamental tension in Google Discover: the system prioritizes engaging, novel content—exactly what AI-generated clickbait is optimized to produce.

The AI Content Problem

Google Discover has become a significant traffic driver for news publishers, but its algorithmic curation creates opportunities for manipulation. Unlike search, where users express explicit intent, Discover pushes content proactively. Users trust what appears in their feed.

AI-generated fake news exploits this trust. The articles don't need to fool journalists or fact-checkers—they just need to trigger engagement signals that boost Discover ranking. Sensational headlines, emotional hooks, and controversial framing all game the algorithm.

This isn't the first time AI-powered attacks have leveraged automation at scale. What's different is the target: rather than directly compromising systems, Pushpaganda hijacks Google's content recommendation infrastructure to deliver scareware.

Scareware Tactics

The push notifications delivered by Pushpaganda use psychological pressure to drive clicks:

  • Fake legal threats - "Final warning: Your device has been flagged for illegal content"
  • False virus alerts - "URGENT: 47 viruses detected on your device"
  • Authority impersonation - Messages styled to look like law enforcement or government notices
  • Countdown timers - Artificial urgency pushing victims to act immediately

Each click generates ad revenue through embedded advertisements on the destination scam sites. Victims may also be pushed toward fake tech support, subscription traps, or malware downloads.
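Messages built on these tactics share obvious linguistic tells, which makes them amenable to simple heuristic flagging. A minimal sketch of a keyword/urgency scorer for notification text (the pattern list and threshold are illustrative assumptions, not indicators from HUMAN's research):

```python
import re

# Illustrative scareware heuristics. The keyword patterns and threshold
# are assumptions for demonstration, not IOCs from the actual research.
PRESSURE_PATTERNS = [
    r"\burgent\b", r"\bfinal warning\b", r"\bvirus(es)? detected\b",
    r"\billegal content\b", r"\bact (now|immediately)\b",
    r"\bflagged\b", r"\baccount (locked|suspended)\b",
]

def scareware_score(text: str) -> int:
    """Count pressure-tactic patterns present in a notification's text."""
    t = text.lower()
    score = sum(1 for p in PRESSURE_PATTERNS if re.search(p, t))
    # Countdown-style urgency ("5 minutes remaining") adds to the score.
    if re.search(r"\d+\s*(seconds|minutes)\s*(left|remaining)", t):
        score += 1
    return score

def looks_like_scareware(text: str, threshold: int = 2) -> bool:
    return scareware_score(text) >= threshold

print(looks_like_scareware(
    "URGENT: 47 viruses detected on your device. Act now!"))   # True
print(looks_like_scareware("Your weekly newsletter is ready"))  # False
```

A production filter would combine signals like these with sender reputation and landing-page analysis, but even crude keyword scoring catches the templated copy these campaigns reuse at scale.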

Google's Response

Google confirmed it "launched a fix for the spam issue in question" before receiving the research report, emphasizing ongoing work on "spam-fighting systems and policies against emerging forms of low quality, manipulative content."

The company didn't specify what the fix entails—likely a combination of ranking adjustments, domain blocking, and detection improvements for AI-generated content.

Detection and Protection

For organizations concerned about ad fraud or employees falling for push notification scams:

  1. Block notification prompts - Enterprise browsers can disable or restrict push notification requests
  2. Monitor DNS traffic - The 113 identified domains provide IOCs for network-level blocking
  3. User awareness training - Teach employees that legitimate services never demand immediate action through browser notifications
  4. Review Discover settings - Users can disable Google Discover or limit its sources
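For the DNS-monitoring step, the published indicator list can drive a straightforward network-level check. A sketch of subdomain-aware IOC matching (the domains below are placeholders, not actual Pushpaganda indicators; substitute the list from HUMAN's report):

```python
# Sketch of matching DNS queries against an IOC domain list. The entries
# below are placeholders, not real Pushpaganda domains.
IOC_DOMAINS = {"fake-news-example.top", "push-scare-example.xyz"}

def is_blocked(queried_domain: str) -> bool:
    """Return True if a DNS query hits an IOC domain or any subdomain of one."""
    d = queried_domain.lower().rstrip(".")  # normalize case and trailing dot
    return d in IOC_DOMAINS or any(
        d.endswith("." + ioc) for ioc in IOC_DOMAINS
    )

for q in ["cdn.push-scare-example.xyz", "news.example.com"]:
    print(q, "-> BLOCK" if is_blocked(q) else "-> allow")
```

Suffix matching matters here: ad-fraud operators routinely rotate subdomains under a registered apex, so exact-match blocklists miss most of the traffic.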

Why This Matters

Pushpaganda represents the intersection of AI content generation and advertising fraud—two trends that security teams increasingly need to address. The campaign demonstrates that AI doesn't just enable more sophisticated attacks; it enables more attacks, period.

For publishers, the damage extends beyond fraud. When fake content floods Discover, it erodes trust in legitimate news surfaced through the same system, and Pushpaganda's operators are far from the only threat actors monetizing stolen attention.

HUMAN's Satori team continues monitoring for Pushpaganda variants. Given the campaign's success before detection, expect copycats deploying similar tactics across other content recommendation systems.
