There are countless resources out there on participating in Bug Bounty programs from a researcher’s perspective. However, there are surprisingly few resources on how to launch and maintain a program in an enterprise ecosystem. Whether you’ve just inherited a program at your organization, you’re growing one, or you’re just starting one, there are some key steps you can take that will expedite the return on investment (ROI) and prove value to company leadership.
Below I’ve outlined some pitfalls to avoid, and steps that can be taken to ensure the success of the program.
Legal Considerations
Consult legal. On everything. You want to involve legal as early as possible in the development of the program because there is no quicker way to lose leadership’s faith in the program than by introducing legal risk to the company. Remember, your legal department is in the same business as you: managing and mitigating risk within the company. Input from legal can make or break the program. If you put an asset into the program prematurely, or one that doesn’t belong, you can unintentionally expose the company to undue risk. For instance:
- A business-critical asset with no redundancy that can’t hold up to the volume of researchers’ testing and fails.
- An asset that houses pre-release financial statements.
Adding Assets
- You won’t always get buy-in from system owners right away on adding their assets. In some cases you will need to convince them it’s okay, because they will be worried about operational downtime. Remember that not every resource owner will be technical, so it’s easy for them to assume the worst-case scenario. As such, you will need to spend a little time teaching and debunking preconceived notions about Bug Bounty programs.
- Resource owners will also want to know how much extra work they are signing up for by allowing you to put their resources into a bug bounty platform, so be prepared to convey the organizational value-add. It’s probably true that findings will result in additional tickets that translate to more work for someone else, so it’s crucial to illustrate how “securing X” will be viewed favorably by leadership.
- You will need to sit down with resource owners and explicitly outline the known issues and vulnerabilities that should be out of scope (e.g., out-of-date protocols, known CVEs, certain APIs). This is important for avoiding superfluous payouts. If you are aware of a DMARC/SPF record misconfiguration and a researcher reports it, and it’s not listed as out of scope in your brief, it’s already too late: for the sake of the program’s reputation you will probably need to pay it out. So be sure to include these items. Moreover, even when they are listed, researchers will still fail to read the program brief exhaustively and report out-of-scope findings; at least this way you are covered, as you can refer them to policy when you mark the finding as N/A.
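As a concrete illustration, here is a minimal sketch of how those agreed exclusions could be recorded as data so triagers can point back to the program brief when closing a report as N/A. The items, function name, and matching logic are hypothetical, not part of any platform’s API.

```python
# Minimal sketch (hypothetical): record out-of-scope items agreed with the
# resource owner so triagers can cite the program brief when marking N/A.

OUT_OF_SCOPE = {
    "dmarc/spf misconfiguration",   # known mail-record issue
    "tls 1.0/1.1 support",          # out-of-date protocols already accepted
    "/api/v1/legacy",               # API slated for retirement
}

def triage_hint(report_summary: str) -> str:
    """Return a quick hint for the triager; the final call is still human."""
    summary = report_summary.lower()
    for item in OUT_OF_SCOPE:
        if item in summary:
            return f"Likely N/A: matches out-of-scope item '{item}' in the brief."
    return "In scope on first pass; continue triage."

print(triage_hint("DMARC/SPF misconfiguration on the marketing domain"))
```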
Scaling
- Recommend that your team start new assets out in a small pool of invite-only researchers and scale gradually from there, slowly adding in more researchers. It’s very challenging to gauge the amount of resources that will be required to support the program. Many factors contribute to this, such as:
- Volume of findings
- Severity of findings
- Fidelity of the findings
- Consequently, you will want to establish a baseline. Take 3–6 months with a few assets (anywhere from one to five is a good starting point) to understand how much overhead your team experiences; a baseline-tracking sketch follows this list.
- In most bug bounty platforms, having multiple programs for a single organization isn’t an additional cost, at least at the time of writing. Therefore don’t be afraid to compartmentalize different assets into different programs. In fact, this is a highly recommended approach, as it’s an excellent way to tailor policies for specific assets in your company that have different requirements. This plays a huge role in the tuning phase, which is covered in the next section.
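Here is a minimal sketch of what that baseline could look like in practice, assuming you can export reports from your platform into a simple list; the Report structure and field names are illustrative assumptions, not a real export format. It tracks the three factors above: volume, severity, and fidelity (valid reports versus total).

```python
# Hypothetical baseline tracker for the 3-6 month pilot: record each report's
# severity and triage outcome, then summarize volume, severity mix, and
# fidelity (valid reports / total reports).

from collections import Counter
from dataclasses import dataclass

@dataclass
class Report:
    asset: str
    severity: str   # e.g. "P1".."P5"
    valid: bool     # triage outcome: True = actionable, False = N/A / dupe / FP

def baseline(reports: list[Report]) -> dict:
    total = len(reports)
    valid = [r for r in reports if r.valid]
    return {
        "volume": total,
        "severity_mix": dict(Counter(r.severity for r in valid)),
        "fidelity": round(len(valid) / total, 2) if total else None,
    }

pilot = [
    Report("app.example.com", "P1", True),
    Report("app.example.com", "P4", False),
    Report("api.example.com", "P3", True),
]
print(baseline(pilot))   # the overhead picture you want before scaling up
```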
Tuning
- A public program will generate a lot of noise and false positives (FPs), which will require resources to triage. It may be tempting to drop assets directly into a public program immediately and wait for the Highs and Criticals to roll in, but I promise this is not what will happen. While yes, you will probably get P1s and P2s, you aren’t just looking at those as a standalone metric. What you are looking for is the ratio of high-fidelity findings to FPs. If getting your P1s and P2s requires two people from your team to check into the program and read 15 reports each day for an entire month, is the time worth it? The threshold for this is something only you can answer. Consider these two scenarios:
- Let’s say you get 3 critical findings out of 150 reports in a public program. That’s 1 valid finding for every 50 reports.
- Let’s say you get 3 critical findings out of 60 reports in a private program. That’s 1 valid finding for every 20 reports.
- The math is trivial here, so the point should be clear: more reports do not equate to more value. Not to mention, the amount of resources at your team’s disposal will be very different. So even though scenario B has a much better ratio of valid findings to reports, scenario A might be well within the operational threshold for a larger organization. Of course, that doesn’t necessarily mean they wouldn’t still be interested in trimming the fat to cut resource costs.
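For clarity, here is that arithmetic worked through in a few lines of Python; the numbers are the illustrative ones from the two scenarios above, not real program data.

```python
# Worked version of the two scenarios above (illustrative numbers only).

def reports_per_valid_finding(valid_findings: int, total_reports: int) -> float:
    """How many reports a triager reads, on average, per valid finding."""
    return total_reports / valid_findings

scenario_a_public  = reports_per_valid_finding(valid_findings=3, total_reports=150)  # 50.0
scenario_b_private = reports_per_valid_finding(valid_findings=3, total_reports=60)   # 20.0

print(f"Scenario A (public):  1 valid finding per {scenario_a_public:.0f} reports")
print(f"Scenario B (private): 1 valid finding per {scenario_b_private:.0f} reports")
```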
Maintaining The Program
Validate all findings against existing ones (i.e., look for duplicates). It might not be obvious to individuals who haven’t personally managed a bug bounty program, but for those who have, certain patterns have likely surfaced during their tenure. Oftentimes a program will go for weeks without a valid finding, and then all of a sudden you get one. Ok, not so weird: there are ups and downs in findings. Researchers are people too, and they go on vacations, have school, other work, holidays, etc. It’s expected that findings will vary from month to month. What is strange, however, is when that same finding comes in from three separate researchers just a few days later.

Here’s the thing: researchers share their findings in private chat circles to get paid out on the same vulnerability. It’s called double-dipping, and it’s more common than you think, especially in public programs. While it’s less common in private programs for obvious reasons, it still happens: some obscure finding comes in one day, and three days later the same finding is reported by multiple different researchers. The platforms (HackerOne, Bugcrowd, etc.) will often overlook these.

As a program manager you need to be on the lookout for these duplicates, and the best way to do so is to be intimately familiar with the reports you’ve received. This is made easier by reducing the extra noise as described in the tuning section above. Once you’ve minimized the noise, it’s easier to cross-reference a new ticket with existing tickets. The duplicates won’t have the same title, of course, but the target endpoint, web path, and likely the payload will. These fields can serve as your “unique identifier”, so to speak.
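Here is a minimal sketch of that cross-referencing idea, assuming you can pull the endpoint, path, and payload out of each report; the fingerprinting approach and function names are my own illustration, not a platform feature.

```python
# Hypothetical duplicate check: fingerprint reports on the fields that repeat
# (endpoint, path, payload) rather than on the title the researcher chose.

import hashlib

def fingerprint(endpoint: str, path: str, payload: str) -> str:
    """Normalize the three fields and hash them into a rough unique identifier."""
    normalized = "|".join(s.strip().lower() for s in (endpoint, path, payload))
    return hashlib.sha256(normalized.encode()).hexdigest()[:16]

seen: dict[str, str] = {}   # fingerprint -> first report ID that used it

def check(report_id: str, endpoint: str, path: str, payload: str) -> str:
    fp = fingerprint(endpoint, path, payload)
    if fp in seen:
        return f"{report_id}: likely duplicate of {seen[fp]} - review before paying out"
    seen[fp] = report_id
    return f"{report_id}: new fingerprint, continue triage"

print(check("RPT-101", "shop.example.com", "/cart/checkout", "' OR 1=1--"))
print(check("RPT-117", "shop.example.com", "/cart/checkout", "' OR 1=1--"))
```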
Optimizing The Policy
- There are grey areas. Sometimes you will get findings which, while they directly impact your org, are technically a bug or issue in a vendor’s product or an app developed by a third party. In this case, do you pay out, or pass the researcher along to the vendor? Consider this situation:
- You have a popular Customer Relationship Management (CRM) solution on top of which you can build custom business applications. Your company outsources the development of such an app to a third-party developer. This app is public facing and in your bug bounty program. A researcher reports that they are able to obtain access to highly sensitive and proprietary information via an exploit in the app, resulting in a P1. How do you handle this?
- Well, on the one hand this researcher did just inform you of a critical vulnerability which, if divulged to any malicious party, would have made headline news. There was also no way for the researcher to know this wasn’t an app owned and maintained by your company.
- On the other hand, it’s not a bug in your app. You only own the sub/domain. If you paid out every time for a bug in someone else’s product you’d drain your award pool. It would be like trying to use your car’s AC to cool down the outdoors.
- So what’s the answer here? Should this target never have been placed in scope? Not quite, because remember, your job is to secure the organization: even if this had never been added to the scope, that doesn’t mean a malicious actor wouldn’t have stumbled upon the same vulnerability sooner or later.
- The answer is in your policy. You will want to include in your policy what happens in terms of payout in these situations. Do you pay it out because your org was impacted? Or do you refer the researcher to the vendor/platform that is actually responsible for the bug? In our experience, most organizations pay the bounty for P1s and P2s in such scenarios because the value of “plugging that hole” outweighs the cost of the payout. For everything else, refer the researcher to the party responsible for the vulnerability. If they already have a program, your job is easy. If not, out of respect for the researcher and for maintaining a healthy relationship, facilitate an introduction. Remember, you want this researcher to come back and keep hunting on your program, so if they are respectful, have a decent history on your program, and submitted a clean writeup, reciprocate the professionalism.
Payouts
- It’s common to get a finding that is exactly the same but affects QA, Staging, and Prod. If it’s a P1, are you going to pay out 3 times? Probably not, so it’s imperative to spell out in your policy how these scenarios pay out. For instance, define your payouts to be based on the number of distinct fixes required: if the same commit works across all environments, pay out once, and only for the highest-severity environment. Make sure this is abundantly clear in the policy, because researchers will complain when they expect a 3x P1 payout and are told they will only get one. A sketch of this grouping logic follows at the end of this section.
- The last thing to note relates to the above situation. Researchers often have Medium blogs and Twitter accounts they use to drive traffic towards those blogs. They also take to Twitter a lot when something doesn’t go their way. If they feel slighted or jaded in any way, be prepared for outbursts on Twitter. Sometimes they won’t mention the program’s name, other times they will. So you need to be prepared with your legal and comms teams for events where a negative tweet garners too much traction, regardless of who was in the right.
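As promised above, here is a minimal sketch of the “pay per distinct fix” rule, assuming each triaged finding is tagged with an identifier for the fix that resolves it; the bounty amounts and the fix_id field are illustrative assumptions, not figures or fields from any real program.

```python
# Hypothetical "pay per distinct fix" rule: identical findings across QA,
# Staging, and Prod collapse into one payout at the highest severity, because
# a single commit closes all three.

BOUNTY = {"P1": 5000, "P2": 2500, "P3": 1000, "P4": 300}   # illustrative amounts

def payouts(findings: list[dict]) -> dict:
    """Group findings by the fix that resolves them and pay once per group."""
    by_fix: dict[str, list[dict]] = {}
    for f in findings:
        by_fix.setdefault(f["fix_id"], []).append(f)
    return {
        # "P1" sorts before "P2", so min() picks the highest severity in the group
        fix: BOUNTY[min(group, key=lambda f: f["severity"])["severity"]]
        for fix, group in by_fix.items()
    }

reports = [
    {"env": "qa",      "severity": "P1", "fix_id": "auth-bypass-123"},
    {"env": "staging", "severity": "P1", "fix_id": "auth-bypass-123"},
    {"env": "prod",    "severity": "P1", "fix_id": "auth-bypass-123"},
]
print(payouts(reports))   # {'auth-bypass-123': 5000} - one payout, not three
```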
We realize this was a wall of text, so for anyone who made it this far, congratulations! If your organization is struggling with getting a bug bounty program off the ground, scaling it, or conveying its value to leadership, feel free to get in touch with us!