Curl Ends Bug Bounty Program After AI Slop Floods Queue
The ubiquitous command-line tool will stop accepting HackerOne submissions January 31. After $86K paid across 78 vulnerabilities, AI-generated noise made the program unsustainable.
The curl project will shut down its HackerOne bug bounty program at the end of this month after six years and $86,000 paid across 78 confirmed vulnerabilities. The reason: an unsustainable flood of AI-generated vulnerability reports that waste volunteer time without producing valid findings.
Daniel Stenberg, curl's founder and lead developer, announced the decision this week. "The main goal with shutting down the bounty is to remove the incentive for people to submit crap and non-well researched reports to us," he explained.
The AI Report Problem
In just the first 21 days of 2026, the curl security team received 20 AI-generated bug reports. Seven arrived within a single sixteen-hour period. None described actual vulnerabilities. Each required significant time from volunteer maintainers to properly assess and dismiss.
The reports follow a recognizable pattern: they sound technically plausible but lack substance. They reference code paths that don't exist, describe vulnerabilities that can't be exploited, or misunderstand how curl actually works. The AI generates confident-sounding security analysis that falls apart under examination.
For a project maintained largely by volunteers, triaging these reports consumes resources better spent on actual development and security work.
What Changes
Starting February 1, 2026, the curl project will no longer accept new HackerOne submissions. Researchers who want to report security issues should use GitHub instead.
Any reports already in progress at month's end will continue being processed. The project isn't abandoning responsible disclosure—just the bounty platform that's become a target for low-effort submissions.
Curl's updated documentation will state that the project no longer offers rewards for reported bugs or vulnerabilities and won't help researchers obtain compensation from third parties. Stenberg plans to publish a detailed blog post explaining the changes.
For submitters of what the project calls "crap" reports, the new security.txt warns of public ridicule and bans. The message is clear: serious researchers remain welcome; automated noise generators do not.
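A security.txt file is a machine-readable pointer, defined by RFC 9116 and served from /.well-known/security.txt, that tells researchers where and how to report. As a rough sketch of the format (the URLs and wording below are placeholders, not curl's actual file):

```text
# Illustrative security.txt per RFC 9116 -- field names are real,
# but every URL here is a hypothetical placeholder.
Contact: https://example.org/security-reporting
# Expires is mandatory: the date after which this file is stale.
Expires: 2027-01-31T00:00:00Z
Policy: https://example.org/security-policy
Preferred-Languages: en
```

A project can use the free-form Policy link, as curl apparently does, to set expectations up front, including the absence of any bounty.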
Industry Implications
Curl is everywhere. The library handles HTTP requests for countless applications, operating systems, and devices. Its security matters—which made the bug bounty program valuable when it attracted legitimate researchers.
Launched in 2019, the program ran through HackerOne and the Internet Bug Bounty, distributing rewards for responsibly disclosed vulnerabilities. Seventy-eight confirmed bugs over six years represent real security value that is now harder to incentivize.
This is the first major infrastructure project to shut down a bug bounty program specifically because of AI-generated noise. It won't be the last. As AI writing tools become more accessible, the economics of bug bounty programs change. The cost of generating plausible-sounding reports approaches zero while the cost of triaging them remains constant.
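The asymmetry is easy to put in numbers. As a back-of-envelope sketch, using the roughly 20 reports in 21 days that curl saw and an assumed (not sourced) average of two volunteer-hours to assess and dismiss each one:

```python
# Illustrative triage-cost model. The submission pace comes from
# curl's early-2026 numbers; the hours-per-report figure is an
# assumption for the sake of the arithmetic.
HOURS_PER_TRIAGE = 2          # assumed average volunteer effort per report
reports_per_month = 20        # ~20 AI-generated reports in 21 days

# Sender's marginal cost is near zero; the maintainers' is not.
wasted_hours = HOURS_PER_TRIAGE * reports_per_month
print(wasted_hours)  # 40 volunteer-hours per month on invalid reports
```

Even under modest assumptions, the burden scales linearly with spam volume while the spammer's cost stays flat, which is exactly the economics the program shutdown is meant to break.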
Other open source projects watching curl's experience may reconsider their own programs. The HackerOne model assumes good-faith participation—an assumption that AI-generated spam invalidates.
Why This Matters
Bug bounties exist because external researchers find vulnerabilities that internal teams miss. When the signal-to-noise ratio degrades enough, the program stops working.
The curl situation highlights a broader tension in security research. AI can assist legitimate researchers in analyzing code and identifying potential issues. But it also enables mass submission of low-quality reports that burden maintainers without contributing security value.
For now, curl returns to GitHub-based reporting. Researchers with genuine findings can still disclose them responsibly. But the financial incentive that attracted some of that attention disappears—and with it, potentially, some of the scrutiny that kept the world's most deployed HTTP library secure.