Have you heard about the teenage hacker who became a millionaire through bug bounties? Or did you hear that Google has paid white hat hackers more than $15 million since 2010 for finding security vulnerabilities? Or maybe you’ve read that independent security researchers discovered more than a hundred thousand security bugs in 2018 alone.
Thanks to relentless media reporting of these stories, there’s a certain romance to bug bounties – the rags-to-riches stories, the struggles and triumphs of finding that elusive bug, and the dramatic narratives of hackers saving the world from cyber criminals. It’s no wonder that, according to Google Trends, searches for ‘bug bounty’ have trended upward over the last five years.
As a result, companies seem more likely than ever before to enlist the help of the white hat community through bug bounty programs. This is especially true in the aftermath of a large security breach, when they need to show that they are completely committed to cybersecurity.
From the perspective of many in the security community, such a move probably makes a lot of sense. If you are a cybersecurity pro with some technical chops, you can’t help but think you should get in the game as a bug bounty hunter. Or perhaps you have the unenviable responsibility of keeping your organization secure and think you should at least consider bug bounties as a viable option for your vulnerability management program.
One thing is for certain: you can no longer ignore crowdsourced security testing models for continuous vulnerability discovery. Today’s continuous code deployment demands continuous vulnerability discovery and remediation. The expanding technology environment requires an expanding scope of security testing. The increasing complexity of vulnerabilities means increasing depth of testing. We need to increase the frequency, breadth and depth of testing if we hope to lessen the Attacker’s Advantage.
Beyond the challenges around risk and uncertainty, can such crowdsourced models really deliver on their promise? Let’s take a look at some of the challenges that you may not be aware of.
Build it and they will come?
Remember the dot-com days, when everyone was rushing to create a web business? ‘Build it and they will come’ was the mantra back then, and we all know how that turned out.
Similarly, engaging the security researcher community and getting them to take an interest in your bug bounty program may not be that simple. Based on an analysis of publicly available data from a large crowdsourced security testing platform, it quickly becomes clear that bug bounty programs don’t always generate a lot of attention:
– 49% of all programs received fewer than 10 reports in the last 6 months;
– 18% of ‘new’ programs (active < 1 year) received zero reports; and
– 38% of ‘new’ programs (active < 1 year) received fewer than 10 reports.
This could mean one of two things: either these systems are very secure, or there is an uneven distribution of crowdsourced resources and these programs are not getting enough attention. In other words, white hat hackers might be focusing only on the biggest, most popular bug bounty programs and ignoring everything else.
You must also consider the dynamics of such crowdsourced security testing models. Bug bounty hunters are in a race to be first, since rewards are paid only for valid, non-duplicate vulnerabilities. This means they will work only as hard as the anticipated rewards (both financial and otherwise) justify, and may adopt participation strategies that maximize returns for minimal effort. If their effort is not yielding maximum payouts, they may simply move on to other programs.
How big (or good) is the crowd … really?
Some companies that adopt bug bounty models believe they are scaling their red team to thousands of pros that can hunt down their security bugs. It’s a romantic thought but is it fact?
We often hear about large crowdsourced security testing platforms with tens or hundreds of thousands of registered ethical hackers (or security researchers). But read the published reports from these same platforms closely, and you’ll find that only around 5-10% of those researchers have ever received rewards.
For example, take a look at the Google Vulnerability Reward Program, which paid out $3.4 million in 2018, and you will notice that only 317 security researchers were paid.
So it seems the ‘crowd’ may not be as large as some people think. The ‘real’ crowd of active, contributing participants may be quite small, even if a platform has a lot of ‘registered users’. It could also be the case that some participants are just trying their luck by spamming bug bounty programs with invalid bugs.
Not the most efficient model
While crowdsourced security testing may deliver useful and sometimes surprising results, it is not the most efficient use of scarce cybersecurity resources.
Facebook awarded $1.1 million to security researchers in 2018. But only 700 vulnerabilities were rewarded out of 17,800 reports. This means just around 4% of reports were considered valid.
Considering that 17,100 reports were rejected, how much waste are we looking at? Let’s suppose a security researcher spends 2 hours finding and reporting a bug, and Facebook takes 30 minutes to triage (verify the validity of) each one. Taking the median wage for information security analysts of $47.28 per hour, based on data from the U.S. Bureau of Labor Statistics, we are looking at roughly $2 million worth of hours wasted – double the awards paid out. And remember, this does not even include the time spent by security researchers that did not result in a report submission.
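The back-of-the-envelope math above can be reproduced in a few lines. The report counts and hourly rate are the figures cited in this article; the 2-hour and 30-minute effort figures are the same assumptions, not measured data:

```python
# Estimate of hours and dollars spent on invalid bug reports in the
# Facebook example. Report counts and the BLS hourly rate are from the
# article; per-report effort figures are assumptions.

TOTAL_REPORTS = 17_800
VALID_REPORTS = 700
HOURLY_RATE = 47.28        # USD, BLS median for infosec analysts

HOURS_PER_REPORT = 2.0     # assumed researcher effort per report
HOURS_PER_TRIAGE = 0.5     # assumed triage time per report

invalid_reports = TOTAL_REPORTS - VALID_REPORTS   # 17,100 rejected
wasted_hours = invalid_reports * (HOURS_PER_REPORT + HOURS_PER_TRIAGE)
wasted_cost = wasted_hours * HOURLY_RATE

print(f"Invalid reports: {invalid_reports:,}")
print(f"Wasted hours:    {wasted_hours:,.0f}")
print(f"Wasted cost:     ${wasted_cost:,.0f}")    # ≈ $2 million
```

Even under these conservative assumptions, the cost of processing rejected reports alone roughly doubles the $1.1 million actually paid out.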
While security researchers do not get paid if they do not produce results, someone is still paying for resources to perform triage. If you are running your own bug bounty program, that someone is you paying a security team to handle the incoming bug reports. Even if you are using a managed bug bounty service, that someone is still you, paying the crowdsourced security testing platform.
You’ve probably heard of the Law of Diminishing Returns, which states that in all productive processes, adding more of one factor of production, while holding all others constant, will at some point yield lower incremental per-unit returns. In simple terms, it means doing more gives you “less bang for the buck.”
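One way to see why this happens in bug hunting specifically: with a fixed pool of discoverable bugs, each additional researcher mostly rediscovers bugs someone else already found, so the marginal yield shrinks. The numbers below are purely hypothetical, chosen only to illustrate the shape of the curve:

```python
# Toy model of diminishing returns in crowdsourced bug hunting.
# Assume a fixed pool of bugs, each found independently by any given
# researcher with probability p_find. All figures are hypothetical.

def expected_unique_bugs(researchers: int,
                         total_bugs: int = 100,
                         p_find: float = 0.05) -> float:
    """Expected number of distinct bugs found by n researchers."""
    return total_bugs * (1 - (1 - p_find) ** researchers)

previous = 0.0
for n in (10, 50, 100, 500):
    found = expected_unique_bugs(n)
    print(f"{n:4d} researchers -> {found:5.1f} unique bugs "
          f"(+{found - previous:.1f} over previous tier)")
    previous = found
```

Going from 10 to 50 researchers adds dozens of new bugs; going from 100 to 500 adds almost none – each extra participant is mostly paid (or unpaid) to find duplicates.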
The Facebook example clearly shows a program well past the point of diminishing returns. That is not to say this approach is a bad thing for vulnerability discovery – you may just catch that ‘black swan’ vulnerability that could have brought down your entire organization. It’s just not optimal.
So, what’s a better way?
We know that relying on vulnerability scanning tools is only good for a baseline analysis of potential vulnerabilities. Manual penetration testing gets us closer to mimicking the techniques of cyber attackers but is difficult to sustain on a continuous basis. Going beyond the usual approach and adopting a crowdsourced model can provide an additional proactive layer of continuous vulnerability discovery. However, based on the above, it may not be as viable an option as it first appears.
What’s important is designing the right model to increase efficiency and avoid diminishing marginal returns. One way to do this is by limiting participation to an optimal number of skilled security researchers, getting greater visibility into the type of tests being performed and security issues already known. At a time when hackers are becoming more aggressive than ever on a worldwide basis, such a solution might be the only way to protect organizations from future security breaches.