

Is your security testing a DevOps blocker?

James Rabon, Sr. Product Manager, Synopsys
 

Hundreds of Google engineers and developers stopped writing code for a week and instead tackled nearly 4,000 warnings from the company's use of FindBugs, a Java static code checker. While 44% of the triaged issues resulted in a bug report, only 16% were considered important enough in practice to fix, according to an article published last year in the Communications of the ACM.

Called Fixit Week, the exercise, which took place a decade ago, ended up tackling only 42% of the total outstanding defects from the tool—a list totaling 9,473 reports. The episode highlights that manually triaging analysis warnings is not a sustainable practice.

For companies adopting agile programming practices, such as DevOps, the lesson is even more significant. Running a static testing tool on a typical software development project can lead to dozens, if not hundreds, of alerts, warnings, and flagged defects. While some development cycles specifically build in testing and remediation steps, the volume of test-suite warnings often becomes a significant blocker for inline security analysis in DevOps.

In Micro Focus Fortify's own dataset, 31% of more than 19 million findings have been suppressed by developers and application security specialists. Of the alerts that companies did not feel merited remediation, about 5% were ignored due to technical considerations.

The remaining 95% were suppressed because context eliminated the risk: the findings were technically correct, but irrelevant given how the application was actually built and deployed.

Is your security testing becoming a DevOps blocker? Here's what your team needs to know to avoid a bumpy road ahead.

Risk and false risk

The 5/95 breakdown of technical false positives versus contextual false positives shows why context is a major factor in triaging vulnerability alerts. Many companies have defaulted to a manual process of dismissing warnings that are unlikely sources of risk. Because it focuses analysis only on the code being checked into the repository, this practice may be maintainable in the short term, but it will likely buckle under the weight of context-dependent alerts.

In addition, the constant deluge of alerts to be triaged increases the likelihood of a vulnerability or other serious code defect escaping into released software. The approach also risks important issues being dismissed accidentally, or the risk threshold being set too high.

The heart of the problem is the false positive—a warning of a vulnerability or a coding error that, for the code in question, is a non-issue.

While many developers believe that a false positive is an error of misidentification, in reality the problem is a lack of context. A flagged issue might affect code that is not used, involve an input that is never exposed, or be too difficult to exploit. Such issues could be critical for one development project but completely benign for a different one.

Customers do not consider a denial-of-service finding in an internally deployed meeting-scheduler application, for example, to be relevant, because there is very little chance that an attacker could exploit the issue when the application is not external-facing. Log-forging findings in the same application are typically suppressed for the same reason.
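A context-driven suppression policy like the one described above can be as simple as a mapping from deployment context to suppressed finding categories. The sketch below is a hypothetical simplification: the category names, context labels, and policy itself are illustrative assumptions, not Fortify's actual rule format.

```python
# Hypothetical policy: suppress certain finding categories based on
# where the application is deployed. Names are illustrative only.
POLICY = {
    # Internal-facing apps: DoS and log forging carry little practical risk.
    "internal": {"denial_of_service", "log_forging"},
    # External-facing apps: nothing is suppressed by default.
    "external": set(),
}

def is_suppressed(category: str, deployment: str) -> bool:
    """Return True if this finding category is suppressed in this context."""
    return category in POLICY.get(deployment, set())

print(is_suppressed("denial_of_service", "internal"))  # True
print(is_suppressed("denial_of_service", "external"))  # False
print(is_suppressed("sql_injection", "internal"))      # False
```

Note that SQL injection stays in scope regardless of context, matching the observation later in the article that some categories are addressed no matter the application's risk profile.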

[ Also see: Put security in DevOps first, not as an add-on ]

Automate the triage as much as possible

To ensure that context is considered, developers should get warnings for the code they are currently working on and create rules to automatically triage any issues. Any machine-learning aficionado will note that this is a classification problem. Static analysis management systems that incorporate machine learning can create their own classification rules to hide warnings that are likely false positives.
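To make the classification framing concrete, here is a minimal sketch of learning from past analyst decisions: count how often each (category, context) pair was suppressed versus kept, then score new findings by their historical suppression rate. This is a toy frequency model, not any product's actual machine-learning pipeline; all names and data are invented for illustration.

```python
from collections import defaultdict

class TriageClassifier:
    """Toy classifier: scores findings by historical suppression frequency."""

    def __init__(self):
        # (category, context) -> [times suppressed, times kept]
        self.counts = defaultdict(lambda: [0, 0])

    def train(self, findings):
        for category, context, suppressed in findings:
            self.counts[(category, context)][0 if suppressed else 1] += 1

    def suppress_probability(self, category, context):
        suppressed, kept = self.counts[(category, context)]
        total = suppressed + kept
        # Unseen pairs get 0.5: undecided, so they surface for human review.
        return suppressed / total if total else 0.5

# Hypothetical history of analyst triage decisions.
history = [
    ("log_forging", "internal", True),
    ("log_forging", "internal", True),
    ("sql_injection", "internal", False),
]
clf = TriageClassifier()
clf.train(history)
print(clf.suppress_probability("log_forging", "internal"))    # 1.0
print(clf.suppress_probability("sql_injection", "internal"))  # 0.0
```

A real system would use richer features (data flow, sink type, code ownership) and a proper classifier, but the shape of the problem is the same: past decisions become training labels.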

The rule for high-performing DevOps teams: Never make the same decision twice. If a certain bug case is considered to be not significant, then all bugs that match that class and context should also have a reduced importance.
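The "never make the same decision twice" rule can be sketched as a decision memory keyed on a finding's class and context rather than its location, so one suppression covers every future match. The `Finding` shape and field names below are hypothetical simplifications.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Finding:
    category: str   # e.g. "log_forging"
    context: str    # e.g. "internal"
    location: str   # file:line; deliberately NOT part of the rule key

class TriageMemory:
    """Remembers suppression decisions at the class+context level."""

    def __init__(self):
        self.suppressed_rules = set()

    def suppress(self, finding: Finding):
        # Record the decision for the whole class of bug in this context,
        # not just this one occurrence.
        self.suppressed_rules.add((finding.category, finding.context))

    def needs_review(self, finding: Finding) -> bool:
        return (finding.category, finding.context) not in self.suppressed_rules

mem = TriageMemory()
mem.suppress(Finding("log_forging", "internal", "app/log.py:42"))
# A later finding of the same class in the same context is auto-triaged:
print(mem.needs_review(Finding("log_forging", "internal", "app/audit.py:7")))   # False
# A different bug class still surfaces for review:
print(mem.needs_review(Finding("sql_injection", "internal", "app/db.py:12")))   # True
```

Keying on class and context rather than exact location is what lets the decision generalize, which is precisely how the triage backlog shrinks over time.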

By immediately triaging issues and flagging only the most important defects, developers can maintain a DevOps pace without adding risk. In fact, by incorporating security into the DevOps cycle—often referred to as "DevSecOps"—risky software flaws are more likely to be fixed.

In Google's Fixit case, for example, developers considered issues flagged immediately at compile time to be more critical. Nearly three-quarters of respondents to a survey at the time considered the warnings to pertain to "real problems," according to the Communications of the ACM article. With code checked into the repository and scanned out of band, only 21% of issues were considered significant.

Giving developers clear instructions on how to fix the flaw helps as well. In the Google case study, for example, most developers who received automated patch advice—57%—felt it was helpful, while only 2% felt it just created more work for them.

Every company has a different risk level, so there is no one-size-fits-all approach to automating the triage of some fraction of vulnerability alerts. SQL injection attacks, for example, are typically addressed by Fortify customers regardless of the risk profile of the application.

Do your DevOps road clearing

Smoothing the road as much as possible for developers is central to DevOps, and machine-learning-assisted triage is a key component of ensuring that any agile development process produces secure software.

For more on this subject, drop in on my session, "Do your pipelines remember? They must if you want to go fast with static analysis," at SecureGuild, the online security conference for testing professionals, May 20-21, 2019. Can't make the date? All registered users have full access to session recordings after the event.
