
HackerOne Pays Out $81 Million in Bug Bounties Over the Past Year: AI Driven Findings, Program Trends, and What It Means for Security
A detailed look at HackerOne's reported 81 million in bug bounty rewards over the past year, highlighting program scale, AI driven vulnerability discovery, and practical guidance for building effective disclosure programs.
HackerOne's Year in Review: 81 Million in Bug Bounties and the AI Security Shift
In a landscape shaped by rapid advances in artificial intelligence and an ever expanding attack surface, the bug bounty ecosystem has shown impressive momentum. A leading platform disclosed that researchers around the world earned a total of 81 million in rewards for responsibly disclosing vulnerabilities over the past year. This milestone reflects a 13 percent year over year increase, signaling growing confidence in crowd sourced security and the value of coordinated vulnerability disclosure programs.

What makes this figure particularly notable?
What makes this figure particularly notable is the breadth of programs and participants involved. The platform currently hosts more than 1,950 bug bounty programs, delivering vulnerability disclosure, penetration testing, and code security services to a wide array of organizations. Among the customers are recognizable names spanning tech, automotive, finance, and government sectors, illustrating how broad the demand for proactive security has become. The ecosystem is not only about big payouts; it rests on scalable processes that translate research into safer products and services for millions of users.
On average, individual active programs pay out roughly 42 thousand dollars per year. While the typical program is modest, the top tier tells a different story: the 100 largest programs alone accounted for about 51 million in rewards during a 12 month window beginning in mid 2024 and ending mid 2025. That concentration reveals how major organizations with clear, well defined scopes can unlock substantial security value by inviting researchers to audit their products and services at scale.
Beyond raw payout totals, the year shows how the makeup of risk reporting is evolving. The top 10 programs together contributed around 21.6 million, underscoring that a relatively small set of high value programs drive a large share of activity. Meanwhile the top 100 all time earners have accumulated roughly 31.8 million in earnings, highlighting the lucrative potential for researchers who build a consistent track record across multiple programs.
Another striking trend is the rapid rise of AI in vulnerability discovery. Researchers increasingly use AI tools to assist in reconnaissance, fuzzing, and triage, with AI related vulnerabilities rising more than 200 percent this year. Prompt injection vulnerabilities, a form of AI specific risk, surged by over five hundred percent, earning attention as the fastest growing category in AI security. This shift mirrors broader industry efforts to secure AI powered systems, including chatbots, copilots, and autonomous security agents.
In total, AI figured prominently in program scope in 2025. About 1,121 bug bounty programs explicitly included AI in their scope, a 270 percent year over year increase. Autonomous AI powered agents submitted more than 560 valid vulnerability reports, illustrating how AI augmentation is transforming how researchers hunt for flaws. According to researchers surveyed by the platform, about 70 percent have used AI tools in their workflow to enhance their hunting capabilities over the last year. This confluence of human expertise and AI assistance marks a new era for vulnerability discovery and program management.
HackerOne leadership framed this shift as the emergence of what they call bionic hackers a new generation of researchers who combine human intuition with AI to uncover issues at an unprecedented scale. The implications are clear for security teams: AI can accelerate discovery and improve coverage, but it also requires corresponding upgrades to triage, validation, and remediation processes to maintain trust and speed. As AI becomes a more integral part of security programs, organizations should invest in robust disclosure workflows, secure data handling, and transparent reward structures to keep researchers motivated while protecting customers.
For organizations running bug bounty programs, several practical takeaways emerge. Start with a clear and well scoped program that outlines which systems are in scope and which are out of scope. Ensure that you have a fast and fair triage process so researchers see timely validation on their submissions. Align rewards with impact so critical issues receive meaningful recognition. Embrace AI thoughtfully by providing guidelines on acceptable AI assistance and by auditing AI generated reports for accuracy. Finally, treat vulnerability disclosure as an ongoing partnership with the research community rather than a one off transaction. This mindset helps build a sustainable program that continuously improves product security while maintaining researcher engagement.
From a broader security strategy perspective, the year highlights the growing importance of vulnerability disclosure as a core capability. Organizations that embrace outside expertise early in the development lifecycle can catch issues before they become exploited threats. The competitive advantage lies not just in the payout numbers but in the speed and quality of remediation, the governance around triage, and the ability to attract and retain skilled researchers who can keep pace with evolving attack techniques.
In the coming year, expect AI to continue shaping how researchers hunt for vulnerabilities. Expect the list of in scope AI categories to expand and for novel disclosure workflows to mature. For security leaders, the practical path forward is clear: design inclusive, scalable bug bounty programs; invest in AI responsibly; and strengthen your vulnerability management lifecycle so that every discovered issue translates into tangible security improvements.
To make the most of these developments, organizations should pair bug bounty programs with proactive security measures such as automated scanning, secure coding training, and a transparent vulnerability disclosure policy. By treating bug bounties as a strategic capability rather than a side project, teams can improve risk posture, accelerate remediation, and foster a vibrant community of researchers who help protect users and customers around the world.
