
Running a bug bounty program takes serious effort

Running a bug bounty program requires sufficient preparation to avoid missteps that can harm an organization’s reputation.

We’ve discussed the rapid growth of vulnerabilities in Internet of Things and industrial control systems, which open potential security holes. A growing proportion of those vulnerabilities are severe, and an increasing number are remotely exploitable with little effort.

But how are those vulnerabilities discovered in the first place?

Vulnerability disclosure programs (VDPs) provide a method for identifying and mitigating these flaws, but the people who usually find them are external ethical hackers and security researchers, and they don’t always get paid for their findings. While some hack for fun or from a desire to use their skills to improve an organization’s security, many depend on the income from bug bounty programs (BBPs).

Not all disclosure programs incorporate BBPs or other mechanisms for paying researchers. But according to Bugcrowd’s 2021 Ultimate Guide to Vulnerability Disclosure, 79 percent of organizations with VDPs say they do pay researchers for “impactful findings.”

Growing attack surfaces

Meanwhile, the attack surface continues to grow. The amount of online data is now 50 times what it was in 2016, with nearly a third residing in unmonitored assets, according to Bugcrowd’s 2021 Ultimate Guide to Bug Bounty.

At the same time, the guide uncovered a huge skills shortage, with 1.8 million unfilled jobs by 2022, an “ineffective security architecture,” and a startling statistic concerning the dark web. That part of the internet isn’t five, 50 or 500 times bigger than the “surface web”—more like 5,000 times bigger, with many “dynamic, motivated adversaries.”


Although web apps are responsible for most vulnerabilities, submissions to all target categories are gaining ground. In the last year, API vulnerabilities doubled and those found in Android targets more than tripled. (Source: Bugcrowd)

Recent individual awards for discovering bugs have been huge. In August, for example, PolyNetwork awarded a white hat hacker $500,000 for exposing flaws in its cryptocurrency platform.

A number that large is more often associated with multiple bugs found by multiple researchers. For instance, Google recently paid a total of $130,000 for 27 bugs found before the release of Chrome 93, the largest being $20,000. Since launching its VDP 10 years ago, Google says it has rewarded over $29 million for more than 11,000 bugs.

Microsoft, with undoubtedly many more bugs to be found, has paid about half that much — $13.6 million — in only a year, with single rewards up to $250,000.

The U.S. government also runs several BBPs, primarily under the aegis of the military. The Defense Department periodically stages short-term BBPs such as “Hack the Air Force” and “Hack the Army.” Last year the Defense Advanced Research Projects Agency launched its first BBP, the Finding Exploits to Thwart Tampering program, to stress-test new secure hardware during development.

What successful BBPs require

As Bugcrowd’s guide notes, companies with large security teams sometimes run their own BBPs. Many, however, eventually shift to external, crowd-sourced security platforms run by third-party providers such as Bugcrowd.

“The basic difference between public and private BBPs is that private BBPs typically only allow program managers to privately invite hackers or set up automation to invite them based on metrics, since their BBP is not publicly listed,” John Jackson, founder of ethical hacking group Sakura Samurai, told EE Times.



“NDAs and other requirements are also stricter, with more control over how many people are hacking on the program,” Jackson said. “Private programs are good for starting out. Public programs are good for larger and/or more experienced organizations.”

Jackson, an independent security researcher and expert on BBPs, has authored what’s probably the world’s first textbook on the subject, Corporate Cybersecurity: Identifying Risks and the Bug Bounty Program, scheduled for release in December.

A BBP comes with multiple concerns, but Jackson’s top three are: “Who’s managing them, lack of communication with hackers, and asset and program scoping.”

A key consideration is selecting a qualified program manager. If that person doesn’t hack, work in the cybersecurity field, or understand all the different kinds of attacks and classes of security issues, they can make “critically wrong decisions” due to a lack of experience and knowledge.

For example, Jackson added, “they might think that a remote code execution vulnerability on a developer’s server is trivial because it’s not the ‘production’ environment. But what is it connected to?”

Moreover, unethical hackers “can potentially gather large amounts of sensitive data from the server or from various exploits to escalate the criticality much further — not to mention it’s a server owned by the organization.”

Further, an inexperienced program manager can compound problems if they don’t understand different vulnerability types and attack vectors and why vulnerability details matter. Those not security-oriented “may not understand the criticality of the vulnerability [in a hacker’s report], and may not act on it fast enough, or appropriately,” said Jackson. That could leave companies vulnerable.

It could also lead to unpaid hackers, or hackers not getting paid enough for finding a bug or vulnerability. “Lack of communication can make or break a BBP,” he said. “Program managers have to meet hackers halfway. Communicating with the hacker is literally the purpose of an effective program.”

Over- or under-scoping program assets can also undermine security efforts. Inexperienced managers may put unnecessary assets in scope, producing too many vulnerability reports. That overload could then overwhelm program engineers, preventing a quick fix.

That misfire is typically solved through a private program. “However, over-scoping can still be an issue, and program managers that don’t prepare through rigorous third-party [penetration tests] and the correct security controls may not be ready to take on the nuances that come with running a program,” said Jackson.

Under-scoping is more likely to affect larger organizations. “If an enterprise only puts one web app in scope but has 900 different web apps on 900 different servers, it’s easily defined as just checking a box without forward progress,” he explained. It’s often better to start small with a private program to test the waters, then add more assets later.

Under-scoping also affects hackers since it doesn’t accurately reflect the organization’s security program and the logistics don’t add up. “If you have 25 hackers in your program and all 25 are looking at one bug, they’re all competing,” said Jackson. “Even if your BBP is self-hosted, you’ll still get a lot of hackers; it’s bounty hunting, whoever’s first gets the reward.”

Different hackers may report the same bug, and since only the first report earns the reward, the others go unpaid. The slow rollout of assets caused by under-scoping generates even more of these duplicate reports.

What’s the bottom line? Running a BBP regardless of type is “not a check-box operation,” Jackson noted. Rather, it requires sufficient preparation to avoid missteps that can harm an organization’s reputation.

>> This article was originally published on our sister site, EE Times.

