Curl Developer Considers Ending Bug Bounty Program Due to AI-Generated Spam

Daniel Stenberg, founder and lead developer of Curl, is considering shutting down the project’s bug bounty program after being overwhelmed by AI-generated junk reports. The influx of low-quality submissions—often indistinguishable from spam—has become a serious burden on the small team maintaining one of the internet’s most widely used tools.
Stenberg, who has publicly voiced concerns about AI misuse since early 2024, says the situation has only worsened in 2025.
"The main trend in 2025 is that AI garbage has reached an all-time high—about 20% of all reports,” Stenberg wrote in a recent blog post. “As of early July, only around 5% of reports turned out to be actual vulnerabilities. The validity rate has dropped significantly compared to previous years.”
The reports range from low-effort submissions generated with AI tools to entries so poorly written that it is hard to tell whether a human used AI assistance or the text was entirely AI-generated. Adding to the problem, a new wave of human-written spam is muddying the waters further.
An Unsustainable Burden for a Small Team
Although Curl receives only about two vulnerability reports per week, manual review is labor-intensive. The Curl security team is just seven people, and each report is reviewed by three or four of them, often taking between 30 minutes and three hours. With that many reviewers on every submission, even a single bogus report can consume several person-hours before it is dismissed.
"Personally, I already spend an insane amount of time on Curl, and even three wasted hours still leave room for other tasks," Stenberg wrote. “But some of my colleagues only have three hours a week to spare for the project. Not to mention the emotional toll of dealing with this mind-numbing nonsense.”
Last week, Stenberg noted, AI-generated spam reports spiked to eight times the usual volume. He’s now maintaining a list of bogus reports, which has already grown to 22 entries in recent weeks.
AI Spam Hits Open Source Hard
This problem isn’t unique to Curl. In December 2024, Python developer Seth Larson voiced similar concerns, explaining that AI-generated reports often appear credible but collapse under expert scrutiny—wasting valuable time and resources.
In May 2025, Benjamin Piouffle, a software engineer at Open Collective, echoed the frustration, saying his team faces the same wave of AI-driven bug-report spam.
"We may eventually have to switch to a platform like HackerOne and restrict submissions to verified researchers,” Piouffle said. “Right now, we handle everything manually. But doing so could make it harder for new researchers to break in.”
Is It Time to End the Bounty?
Since launching the program in 2019, Curl has paid out over $90,000 for 81 verified vulnerabilities. The program is managed via HackerOne, which currently requires submitters to disclose AI use—though AI-generated reports are not banned outright.
Stenberg says he will spend the rest of the year evaluating potential fixes, including:
- Charging a fee for submitting reports
- Banning the use of generative AI
- Scrapping monetary rewards altogether
Still, he admits that even removing the incentive may not stop the flood of junk reports.
“Many of these submitters seem genuinely misled by AI marketing and believe they’re helping,” Stenberg wrote. “So, removing money from the equation won’t completely solve the problem.”
Key Takeaways:
- Curl’s bug bounty program is under threat due to a surge in AI-generated and low-quality reports
- Only about 5% of 2025 submissions (through early July) have proven valid, down sharply from previous years
- The Curl team spends hours reviewing junk reports, straining limited resources
- Possible remedies include charging for submissions, restricting AI use, or ending the bounty program
- Other open-source projects like Python and Open Collective are facing the same challenges