

How to boost your software security with a clear vulnerability disclosure policy

Rob Lemos, Writer and analyst

In June 2005, network security specialist Eric McCarty found a serious vulnerability in the website of the University of Southern California that exposed online applicants' personal information. As proof, he downloaded seven records from the site. During the following 18 months, federal officials raided McCarty's home, charged him with criminal hacking, and sentenced him to six months of home detention, thirty months of probation, and nearly $37,000 in fines.

While organizations are far less likely to take such drastic action against vulnerability discoverers today, risks remain for anyone researching software flaws. In 2015, for example, security firm FireEye obtained an injunction against a research firm that planned to detail flaws at a security conference. And last year, a digital marketing firm (and alleged spam operator) sued security researcher Chris Vickery for finding a database of 1.4 billion email accounts that the company had left unprotected.

"If researchers are not careful, they can face these sorts of threats still, and not just from corporations," said Dustin Childs, communications manager for Trend Micro's Zero Day Initiative. "Thanks to government regulations like the Wassenaar Arrangement, researchers sometimes also need extra paperwork if they want to report bugs."

Businesses looking to engage with flaw finders have to fight a history that has made researchers wary of dealing with any company, educational institution, or government agency. While the introduction of bug-bounty programs over the past few years—especially longtime holdout Microsoft's rollout of a program in 2013—has eased tensions, companies need to do as much as possible to assuage fears and provide guidance to researchers.

The first step toward doing that is to create and publicize a vulnerability disclosure policy (VDP). A good VDP gives researchers three things: a pledge that the company will treat them fairly and not take legal action, a process through which they can submit issues and receive updates, and a description of the types of flaws the company will evaluate.

"The reality is, vulnerabilities are found every day by security researchers, friendly hackers, customers, academics, journalists, and tech hobbyists," stated bug-bounty program management company HackerOne in a post on VDPs. "Because no system is entirely free of security issues, it's important to provide an obvious way for external parties to report vulnerabilities."

Here are five steps to establish an easily understandable policy and process of vulnerability disclosure.

1. Welcome researchers and promise not to sue

In March, cloud-storage provider Dropbox revisited its VDP with an eye toward protecting security researchers who otherwise might be reluctant to contact the company about issues. Pointing to recent incidents, the company updated its policy.

Chris Evans, head of security for the company, stated in a blog post:

"Anything that stifles open security research is problematic because many of the advances in security that we all enjoy come from the wonderful combined efforts of the security research community. Motivated by recent events and discussions, we’ve realized that too few companies formally commit to avoiding many of the above behaviors."

Companies should commit to classifying good-faith research as "authorized" under the Computer Fraud and Abuse Act (CFAA), not bringing suit under the Digital Millennium Copyright Act (DMCA), and acting as allies against third-party claims, Evans said.
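The commitment Evans describes is often expressed as a short safe-harbor clause in the VDP itself. The wording below is a sketch of that pattern only; it is not Dropbox's actual policy text, and any real clause should be reviewed before use:

```text
Safe harbor (illustrative wording)

We consider security research conducted in good faith under this policy to be:
  * Authorized under the Computer Fraud and Abuse Act (CFAA); we will not
    initiate or support legal action against you for accidental, good-faith
    violations of this policy;
  * Exempt from DMCA claims for circumventing technical controls, to the
    extent the circumvention was necessary to conduct the research;
  * Conducted with our approval: if a third party brings a claim against you
    for research performed under this policy, we will make it known that you
    acted in accordance with it.
```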

2. Create a clear process and scope of research

Companies should also create a clearly defined process that vulnerability researchers should follow, both in conducting research and in reporting security issues. The first consideration should be to limit the scope of what researchers can consider fair game, according to HackerOne.

"Limitations may be put on which product or software versions are fair game, since older versions may be beyond their lifecycle or no longer supported," the company stated. "Organizations may also choose to keep certain systems or products off limits to protect customer data or intellectual property."

The company treats limiting the scope of research as Step 2 in its process for creating a vulnerability disclosure program, after a clearly stated promise and before offering researchers "safe harbor." The US Department of Justice's Computer Crime and Intellectual Property Section has created its own rubric of considerations for organizations, which places designing the program and its scope as the first step in the process.
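One lightweight way to point researchers to the policy and its scope is the security.txt convention (later standardized as RFC 9116): a small text file served from a well-known path on the organization's site. A minimal sketch, with placeholder URLs and addresses:

```text
# Served at https://example.com/.well-known/security.txt
Contact: mailto:security@example.com
Policy: https://example.com/security/vulnerability-disclosure
# In-scope systems, supported versions, and off-limits assets
# are described at the Policy URL above.
Expires: 2026-12-31T23:59:59Z
```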

3. Clearly define expectations

The company should also clearly state its expectations of the vulnerability research process, Dropbox stated. These include giving the company adequate time to fix issues, not demanding bounties, and properly handling any data that is not the researcher's own.

Companies should also describe their ideal vulnerability report, what information the document should contain, and their general timelines for responding to the researcher and testing any vulnerabilities.

"Remember that finders are unpaid, so the request for information is just that: a request," HackerOne states. "Keep in mind, requiring too much information may result in less submissions."
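To make that request concrete, some programs publish a report template. The fields and the example bug below are hypothetical, sketched only to show the level of detail a triage team typically asks for:

```text
Title:     Stored XSS in the profile "display name" field
Product:   example-app web interface, version 2.4.1
Severity:  High - attacker-supplied script runs for anyone viewing the profile
Steps to reproduce:
  1. Log in and open Settings > Profile
  2. Set the display name to <script>alert(document.domain)</script>
  3. Visit the account's public profile page
Impact:    Session theft for any user who views the attacker's profile
Credit:    (optional) name or handle for public acknowledgment
```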

4. Don't muzzle researchers

Companies should not expect researchers to stay mum about their findings, especially if the business is not paying a bounty. For many researchers, publicly disclosing and getting credit for finding a security issue is the true reward.

While companies may own the software that had the security issue, they do not own the information about the vulnerability. This is a key consideration for Dropbox.

"We don’t gate researchers who wish to publish vulnerability details. Using policy or bug bounty payments to muzzle or curate scientific publication would be wrong."
—Chris Evans

5. Seek out resources and example VDPs

Besides the Dropbox and HackerOne discussions of VDPs, Carnegie Mellon University's Software Engineering Institute has a 121-page document, The CERT Guide to Coordinated Vulnerability Disclosure.

And the US Department of Justice's Computer Crime and Intellectual Property Section has an 8-page document, A Framework for a Vulnerability Disclosure Program for Online Systems. Finally, the International Organization for Standardization's ISO/IEC 29147, Information technology — Security techniques — Vulnerability disclosure, is available for free online.

Get crystal clear on software security

Companies that are not prepared for vulnerability reports will often handle any disclosure poorly, in a process reminiscent of the five stages of grief, according to the Carnegie Mellon guide, published in August 2017.

"We often find that vendors of software-centric products are not prepared to receive and handle vulnerability reports from outside parties, such as the security research community," stated CMU researchers in the report. "Many also lack the ability to perform their own vulnerability discovery within their development lifecycles."

Companies that create vulnerability disclosure programs will place themselves on a track toward a better relationship with security researchers. As they iteratively improve their process and communications, their security will only improve, said Trend Micro's Childs.

"Our experience shows that the more we work with a vendor, the better they become at responding to bug reports."
—Dustin Childs
