Marketing-driven vulnerability disclosure: It hurts more than it helps

Back on April 1, 2015, the Zero Day Initiative (ZDI) published a blog post offering naming and graphic design services for submitted bugs. This was clearly an April Fools’ joke, but it was also a ha-ha-only-serious jab at something that is becoming more and more common: PR-driven vulnerability disclosure.

Starting with Heartbleed, several bugs have had logos, special websites, and marketing muscle. These include, but are not limited to, Shellshock, Ghost, Backronym, Poodle, DROWN (recently covered on TechBeacon), and most recently Badlock. Of course, naming vulnerabilities isn't new. Going back to the early days of the Internet, problems had names such as the Morris worm, smurf, and boink. What has changed since then isn't the fact that these things are named; it's the hype and marketing that accompany many of these "logoed" vulnerabilities.

As it turns out, this isn’t a good thing. Here's why.

Disclosure through public relations

The topic of vulnerability disclosure is usually a touchy one for researchers and vendors alike. Everyone has an opinion about how it should work and when words such as "responsible," "coordinated," and "public" should be used. However, there is almost universal agreement that public disclosure should not occur in the form of a leak from a public relations firm. Unfortunately, that is exactly how the Ghost vulnerability found its way to the public, when a leaked note from a PR firm exposed the bug to the world.

This isn’t to say the people who found these bugs should report them in anonymity. Great research deserves to be recognized and rewarded. And this isn’t to claim that Heartbleed and some of the other bugs are not serious issues—they absolutely are. Instead, let’s focus on whom the logos actually benefit and on what real impact these market-“enhanced” disclosures have on the average person.

Logos spur action

One benefit espoused by those who create logos for vulnerabilities is that they draw attention and awareness to the bug. For Heartbleed, a vast number of applications were affected. Having a graphic to display over the shoulder of a news anchor allows the information to be disseminated further than just social media or blog posts. The wider the information spreads, the more likely it is that C-suite executives will begin asking questions about the bug and how their own enterprise is protected. Instead of system administrators fighting for resources, they find themselves in the unusual position of having executive backing—at least on this particular issue.

In this situation, there’s something to the idea that exposure and attention are benefits in themselves. Corporations have a history of being unwilling to spend money on security—even when the need should be obvious. It’s how $80 million can be stolen from a bank and blamed on $10 secondhand switches. You don’t have to be a criminal mastermind to evade what amounts to no practical security whatsoever. In this particular case, the attackers showed they weren’t masterminds, since it was a misspelling of the word “foundation” that exposed the hack. Had they spelled that correctly, they could have made off with $1 billion—again, via an attack that could have been stopped with spending on security. Yet few executives and investors act as though they view security as a sound investment rather than a box to be checked. The first tech IPO of 2016 involved a company specializing in security, and the results were lackluster to say the least.

There is the hope that the proliferation of high-profile security events will increase venture capital to security and research. If it takes a cute logo and a snazzy website to bring actual protection, then let’s make logos. If executives see branded vulnerabilities and then increase security spending, then let’s make logos. If system administrators have free rein to apply the patches needed to stop high-profile attacks, then let’s make logos. However, if those things aren’t happening, we need to take a closer look beyond the logo to find the real impact.

Logos increase risk

There is an argument to be made that attackers specifically target vulnerabilities that have logos. While not wrong, the full truth is that attackers target whatever bug works best, regardless of marketing. Typically, vulnerabilities with logos get a fair amount of analysis and write-ups. This analysis increases the understanding of the bug, but it also lowers the cost to attackers. They no longer have to fully research a bug themselves—marketing has done it for them already.

While that's part of the problem, the real issue is that many users do not apply the patch once it becomes available. The patch itself gets no press, so many don’t even know it is available. In some cases, the vendor producing the patch explicitly does not attempt to get widespread coverage of patch availability. On the surface, this makes some sense. It can be embarrassing to admit you had a problem that required a patch to fix. However, vendors should be applauded for creating and distributing fixes—especially critical fixes—in a timely manner with high levels of quality. Instead, we are left in situations where even after a year, many devices are still vulnerable to Heartbleed.

Logos benefit companies, not researchers or users

If we don’t protect systems and devices from big, bad attacks, what’s the point of the logo? Therein lies the true problem with logo-branded vulnerabilities: The logos don’t actually help anyone. Well, they do help the company that releases the information. Who are the researchers behind Ghost? We know they worked for Qualys, whose stock went up considerably after the release of information about Ghost. There’s no clear causation there, but it is an interesting correlation.

The most recent kerfuffle involving logos centered on a Samba bug dubbed “Badlock.” A spiffy logo and accompanying website were released three weeks before the patch was made available from Microsoft. Speculation about the actual bug began immediately, even as those behind the disclosure admitted the pre-patch hype was good for business. Criticism increased once the patch was released and the details showed that maybe the bug wasn’t as earth-shattering as we had been led to believe—especially when it emerged that the person who discovered the bug had been involved in writing the original code. The community responded by setting up a parody website and making #Sadlock a trending topic.

The entire Badlock ordeal seemed to coalesce all of the critics’ complaints regarding marketing vulnerabilities. In this instance, it became very clear that those involved in the logo and disclosure were more interested in press than in patches. They certainly received a fair amount of press. What isn’t clear is whether people will actually install a patch that corrects a legitimate—if overhyped—bug.

What needs to change

While there is no shortage of criticism for PR-driven vulnerability disclosure, there aren’t many people advocating a better way. Here are three practical things that can be done to quell the marketing hype and actually make the world a safer place for computing.

1. Start doing outreach and marketing for patches, not merely bugs. Mountains of data and studies show that many breaches could have been prevented by installing the relevant free security patch from the vendor. There are many reasons why patches may not be installed, but one factor is that vendors do not widely publicize their availability. There seems to be a stigma around producing patches. Vendors do not like to admit they released software that requires a security update. This is odd, since there doesn’t seem to be any software ever released that doesn’t require some form of update. Instead of going unmentioned, patches should be treated as a normal part of having a computer. Publicizing a patch encourages people to install it. Publicizing patches consistently will make people understand, for better or worse, that patching is normal.

2. Start spending on security. Just as cars and homes need regular maintenance, your enterprise needs care and feeding. It does no one any good to save a few dollars on switches if it results in an $80 million loss. Security should be viewed as an investment rather than a tax. This should be common sense. Ben Franklin stated, “An ounce of prevention is worth a pound of cure.” The same can be said for security. Every dollar spent potentially saves thousands more. At some point, those who hold the purse strings must understand that cutting security spending will cost you in the end.

3. Stop marketing the worst-case scenario. By putting a logo on everything and marketing as though each bug brings on the end times, we become desensitized to real problems. It also makes it harder to distinguish between legitimate, scary bugs and bugs that maybe aren’t. To paraphrase dialog from The Incredibles, if everything is super-mega-critical, then nothing is. Instead of focusing on the worst case, shift the focus to the most likely. That allows everyone to build a case based on reason and prioritize their actions so that they will have the biggest impact.

As with #Sadlock, the community usually decides which vulnerabilities should receive extra attention. Sometimes, as with Heartbleed, they have logos and special websites. Sometimes, as with Conficker, there is no logo—just a bunch of people calling for immediate patching. Then again, Conficker is still being exploited seven years later. Maybe the world would be different if it had a logo.

