Every business needs a process in place for handling security vulnerability reports, but some organizations take a much more proactive approach to dealing with security researchers. An increasing number of hardware and software vendors have formal bug bounty programs. Google, for example, runs its own vulnerability rewards program, and Microsoft has multiple bug bounties covering Office 365, Azure, .NET and Edge, as well as general programs covering exploits and defenses.

And the U.S. Department of Defense (DoD) set up its first bug bounty after several years of watching the software industry, says Katie Moussouris, now CEO of Luta Security. She previously created similar programs for Microsoft and Symantec, worked with the FDA to create market guidance around vulnerability disclosure for medical devices, and helped the DoD prepare for its bug bounty while working at HackerOne. “The DoD was curious about whether those programs were effective, whether the folks participating in them were acting in good faith,” she tells CIO. “They wanted to take what was working in the private sector and fast track that into the DoD.”

“Bug bounties are really just a subset of vulnerability disclosure with a particular incentive. They can be a useful tool. Just like any other incentive program, you're trying to incent certain types of behavior, certain types of bugs,” Moussouris says.

Is a bug bounty for you?

If you’re a business that just uses IT, would a bug bounty be useful to you? Maybe, “if you develop your own software or you rely on software and web applications to collect and handle sensitive data,” says Dwayne Melancon, vice president for products at security software company Tripwire.
These days, that includes an increasingly wide range of businesses. “If you operate a service that relies on software and handles data, it is a good idea to engage with an external penetration tester to help you test the whole system and to ‘think like an attacker’ to exercise the system and evaluate its overall strength,” Melancon says.

Companies like HackerOne, BugWolf, Synack and Bugcrowd offer platforms for both bug bounties and crowdsourced penetration testing. “Offering a monetary reward (a bounty) is definitely not required nor expected,” says HackerOne co-founder Michiel Prins (the DoD runs its bug bounty through the HackerOne service). “But it allows you to show gratitude beyond the words ‘thank you’, creates loyalty and actively incentivizes the researcher to report vulnerabilities to you again the next time they stumble upon something.”

Casey Ellis, CEO of Bugcrowd, agrees that this starts with your own security culture. “Before anything else, it’s important to recognize that vulnerabilities are inevitable. By starting from the assumption that you are vulnerable, you can work towards remediating these vulnerabilities before the bad guys find them. Crowdsourced testing is an effective and efficient way to do this. There are a few simple steps you can take to minimize risk both from vulnerabilities and from unauthorized public disclosures.”

“The goal is to set expectations to promote positive communication and coordination both with internal stakeholders and external researchers, and create a security feedback loop that makes you smarter and more resilient over time. The first step to achieving this is to determine a clearly defined scope.
From there, clearly communicating this scope and keeping an open dialogue with researchers is key.”

Ellis identifies a number of key steps, which are an extension of the way you should be handling less coordinated reports of problems; again, this goes far beyond the IT team. “Develop your vulnerability disclosure policy. Develop a process for handling bug reports. Develop templates with corporate communications and legal for communicating with security researchers, press and development teams. Integrate systems with internal development ticketing software. Decide on the range of rewards for vulnerabilities for incentivized programs.” And because a key part of handling vulnerabilities responsibly is the right level of transparency, “decide on coordinated disclosure policies.”

You also need to be ready to handle reports quickly, says Prins. That includes having an ongoing conversation with researchers. “Companies that run top vulnerability disclosure programs strive to acknowledge receipt of a report within 24 hours. They are also quick to validate (or invalidate) submissions to their program, usually getting to a valid/invalid decision within 48 hours of receipt.”

But it also means dealing with the issues once you’ve validated that they’re a real risk. “In terms of escalation, you are going to need a process for the ‘drop everything and work on this now’ type of vulnerability.”

“Service providers help you do the front end — the interaction with the hackers, the triage — but they won’t help you prioritize fixing those bugs according to your own business needs,” Moussouris warns.
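The triage targets Prins describes (acknowledge within 24 hours, reach a valid/invalid decision within 48) can be expressed as a simple service-level check. A minimal sketch, assuming each report carries received/acknowledged/validated timestamps; the function name and record shape here are illustrative, not any platform's API:

```python
from datetime import datetime, timedelta

# SLA targets quoted in the article: acknowledge receipt within 24 hours,
# validate (or invalidate) the submission within 48 hours of receipt.
ACK_SLA = timedelta(hours=24)
TRIAGE_SLA = timedelta(hours=48)

def sla_status(received, acknowledged=None, validated=None, now=None):
    """Return the list of triage SLAs a vulnerability report is breaching."""
    now = now or datetime.now()
    breaches = []
    if acknowledged is None and now > received + ACK_SLA:
        breaches.append("acknowledge")
    if validated is None and now > received + TRIAGE_SLA:
        breaches.append("triage")
    return breaches

# Example: a report received 30 hours ago with no response yet is past
# the 24-hour acknowledgement target but not yet the 48-hour triage one.
report_received = datetime.now() - timedelta(hours=30)
print(sla_status(report_received))  # ['acknowledge']
```

A real queue would also track the "drop everything" escalation path Prins mentions; this only covers the two clock-driven targets.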
“They won’t help you figure out how many resources you need to devote to engineering so you can fix bugs while you're still working on general operations or development.”

A significant part of the preparation for the DoD bug bounty was “making sure the back-end engineering teams were ready to receive those bugs,” because the first reports arrived within minutes of the program going live.

Prins also recommends taking advantage of the people you already pay to provide hardware, software and services, because your problems might turn out to be something they have to fix. “Build strong relationships with the various suppliers of your IT solutions. The day an external party tells you about a problem with your IT, it is easy to leverage that existing relationship and get the vulnerability fixed.”

“When you are buying new IT and evaluating different vendors, make sure to inquire whether they have a way of receiving vulnerability reports from third parties. Learn about how that process works. Personally, I would prefer a vendor that has a vulnerability disclosure program or even a bug bounty program over a vendor that doesn’t have an established process,” Prins says.

Grey areas and grey hats

You don’t have to go as far as running a bounty program to find the crowdsourced bug bounty and penetration testing services useful. Simply setting up an account for your organization gives researchers a way to contact you when they find issues, notes Troy Hunt, the security consultant who runs the haveibeenpwned.com website to help track data breaches that compromise user account information.

“There’s a maturity matrix. Over time, companies can evolve from having no information at all, to offering contact info, to having a HackerOne or Bugcrowd account.
Even without necessarily incentivizing researchers, you can say ‘here are the kinds of vulnerabilities we will accept and process’.”

That will help you avoid grey areas — like when a security researcher finds publicly accessible private information and downloads it. Are they being thorough or going too far? Clear guidance on that was one of the key points in the DoD bug bounty, Hunt says. “So many people ask me ‘where should I stop?’ when they find a vulnerability.”

Using a third-party platform forces you to define clearly what’s acceptable in advance, rather than slowing down your security response by working out the legal and communications implications as you go.

“Understand that more than just the security and engineering or development teams may have to get involved. It’s not uncommon for legal, compliance, PR and other such teams to play a vital role — even in the most robust vulnerability disclosure and bug bounty programs,” says Nick Harrahill, senior product manager at security company Synack, who also worked on these programs at eBay and PayPal. “It’s pivotal to set clear rules and boundaries for what’s allowed and what isn’t. Additionally, the organization must ensure that the process has legal protections and be prepared to act upon them if breached.”

“Hopefully, they’re ethical individuals simply trying to help, but they can also be individuals with malicious intent attempting to profit from bugs they have found, through extortion (or ‘bug poaching’) or through other malicious means.”

You need a mature security culture to be ready for bug bounties, Harrahill believes, and the key aspect is that “everyone has bought in; they know that this is a priority and the researchers [are] vital to the security of the company.
As much value as it provides, it can also turn out ugly if you don't give it the proper attention.”

There’s a clear line between ethical security researchers trying to help you (and often wanting nothing in return beyond recognition and seeing the flaw fixed) and criminals trying to hold you ransom over a bug they’ve found. But, Hunt warns, “ransomware and the awareness of it are making some companies more suspicious of the motives of genuine researchers, because we’re getting used to hearing about people wanting money.”

These services don’t just make it easier to manage crowdsourced security testing; as Ellis notes, you’ll also be getting reports from “a curated crowd of researchers.” Hunt puts it a little more bluntly: “These programs can sort the wheat from the chaff for you and they know how to handle them.” Especially younger researchers, or those from other countries who lack experience in dealing with businesses, he suggests, “have different social norms, they use very different language and have very different expectations. The way they conduct themselves professionally could easily put them in a position of seeming to be malicious when they don’t mean to be. It’s a difficult negotiation that needs to be managed.”

When you get that right, it has big benefits, says Moussouris. “The people who are turning over things for bug bounty programs are ideally people you want to have come back, who you want to get to know your product or site well, and build good relationships with so they can help you improve.”

Are you ready for a bug bounty?

Because she’s so well known for setting up the Microsoft and Symantec bug bounties, Moussouris says people assume she’ll always recommend them. “I’m Captain Bug Bounty; but I’m also Captain Don’t Hurt Yourself.
A bug bounty is not necessarily appropriate for everybody.”

If the thought of inviting even benevolent hackers to check out your security has you breaking out in a cold sweat, because you already know they’d find so many problems that you’d be overwhelmed by the reports, you’re not ready for a bug bounty. You need to start fixing the problems you know about, and get your own patching and testing protocols up to standard first. That might require outside help, in the form of a security consultant, or it might be time to hire a CSO. You also need to build a security culture internally, which includes getting security involved in development and ops, and training non-technical staff to spot and report phishing attempts.

“If you have that first step of a vulnerability disclosure program, you may be ready to refine that and harness the power of hackers who are already willing to report to you, and point them in the direction you choose, are interested in and are willing to offer a reward for. In that situation, it’s a good way to focus,” says Moussouris. “But if you've never done vulnerability disclosure before, you have no real way to communicate with partners, with affected customers or with the media, let alone with hackers. If you’ve never done this before, and you lack the underlying support structures to be able to do regular vulnerability disclosure, a bug bounty would be a really harsh first step.
It will not work well for you if you lack the appropriate ability to triage and fix those vulnerabilities.”

Moussouris suggests two measures to tell if you’re ready for a bug bounty: the volume of bugs you’re used to handling, and the velocity at which you can fix them. “When an organization says they’re ready for a bug bounty, the first question I ask is ‘how many bugs do you receive on a monthly basis?’ If the answer is very few, or ‘we don’t have a bug program,’ then I walk them back and tell them they need to start this in the appropriate way.”

Moussouris cautions against viewing a bug bounty as a non-disclosure agreement you’re paying for to buy more time to fix a known bug. “That can work in some circumstances, but if the expectation is that you can pay a very small amount to have unlimited time to fix the bug, that will not meet the expectations and norms of the community. That's not why they’re turning vulnerabilities over for those minimal bug bounty fees.”

She also warns against thinking of a bug bounty as ‘security QA’. “Some people are thinking, ‘this is cheaper than a penetration test; I’ll just move my budget.’ That is absolutely the wrong approach; that approach will not get you the results you’re hoping for.”

Instead, she suggests viewing vulnerability disclosure and bug bounties as part of your secure development and deployment lifecycle. “You’re trying to build and deploy the most secure systems possible and you have a plan, post release, for servicing bugs. That’s where your vulnerability disclosure program goes.
If you’re at the mature point where you're cycling those bugs back and learning from them, so you’re creating more secure code or changing deployment practices; if you’re doing that, then you might consider a bug bounty program.”

You can also use bug bounties strategically, for example to bring the systems of a new acquisition up to the same level as the rest of your business once you’ve worked through your own integration processes. “Use a bug bounty for a specific period of time to shake out as many more bugs as possible,” Moussouris advises.

When Microsoft introduced its first bug bounty, for the Internet Explorer 11 Preview, it was because it was having difficulty getting researchers to report bugs during beta testing: the only reward on offer was being named in a security bulletin, and because bugs fixed during the beta cycle didn’t usually get announced in a bulletin, researchers inadvertently had an incentive to wait and disclose bugs later. The bug bounty program offered recognition and a small cash reward, and it was targeted, explains Moussouris. “It was a huge efficiency win for the engineers because they were all working on the exact version of the browser they were getting bugs for.”

Not only did they get relevant bugs, they also got reports that revealed underlying problems they were able to find and fix as well. “That’s what you want to use a bug bounty for,” emphasizes Moussouris. “You don’t use it to replace any part of your existing security efforts; you use it to fine-tune and hone the process and get the kinds of bugs you want in the areas you want.”

Moussouris urges every business to, at the very least, set up a way for security researchers to tell you about problems. “Hardly anyone is even doing vulnerability disclosure. If you have customers, you’re going to set up a customer response system.
If you have code, why not set up a vulnerability disclosure system?” After all, the researchers who want to tell you about your vulnerabilities won’t be the only people who can find them.
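One lightweight way to give researchers that contact channel is to publish a security.txt file (RFC 9116) at the well-known path on your site. A minimal sketch; the addresses and URLs below are placeholders to replace with your own:

```text
# Served at https://example.com/.well-known/security.txt
Contact: mailto:security@example.com
Expires: 2026-12-31T23:59:59Z
Policy: https://example.com/security-policy
Preferred-Languages: en
```

The Contact and Expires fields are required by the RFC; Policy is the natural place to link the disclosure scope and rules discussed above.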