Scale your security with DevSecOps: 4 valuable mindsets and principles

Clint Gibler, Research Director, NCC Group

Modern software engineering methodologies allow development teams to create new features, iterate faster, and provide value to customers more quickly than traditional methods. But sequential and manually driven security approaches have failed to keep up.

This has caused some security teams to resist these new development processes and others to experience sleepless nights about all the unreviewed code getting shipped to production. However, many teams have embraced the shift by practicing "DevSecOps," an overarching term for changes in people, processes, and tools that help security teams similarly move more quickly and give them better leverage in their efforts to keep customers safe.

If your team has not yet embraced DevSecOps, you can learn from the many companies now sharing their lessons learned in their engineering blogs or conference talks.

I've been immersed in this space over the past few years in my work at NCC Group. I've spent a few hundred hours reading related blog posts, watching conference talks, and having in-person conversations with security professionals at various companies, in addition to my day-to-day work as a security consultant, advising companies on how to improve their security postures.

There are many important areas to consider when embracing DevSecOps. Here are a few of the most effective mindsets and principles I've found.

1. DevOps is inevitable: Embrace it

The first stage of DevOps grief is acceptance. Regardless of how you feel about agile and DevOps, they're here to stay. There is simply too much business value in being able to iterate quickly and ship customers new features faster.

For security teams to be part of the important engineering conversations that influence architecture decisions and big-picture strategic direction, security needs to speak the same language, understand the development process, and figure out how to add value without slowing development down. If you try to slow the adoption of CI/CD tools or containers, dev teams will find ways to work around you and will not involve you in future conversations.

In most companies, security cannot (and should not) block an engineering decision or a release unless there's a known critical vulnerability that could be devastating to the business. "But I'm not done reviewing it" or "The security scanners haven't finished yet" are generally not acceptable excuses.

So instead of fighting it, adapt! How can you leverage development teams' existing infrastructure and processes to add security checks and controls? Here are a few examples, with a minimal CI sketch after the list:

  • Integrate continuous lightweight code scanning into CI to find bugs and vulnerable dependencies early.
  • Integrate dynamic scanning into CD in the QA environment to find additional bugs in applications.
  • Use container scanning to detect container images with known vulnerabilities.
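
To make the first bullet concrete, here's a minimal sketch of a CI gate script. It assumes semgrep and pip-audit happen to be installed in the build image; treat these specific scanners and flags as examples to swap for whatever fits your stack:

```python
#!/usr/bin/env python3
"""Minimal CI security gate: run lightweight scanners, fail the build on findings.

Assumes semgrep and pip-audit are installed in the CI image; substitute the
scanners your team actually uses.
"""
import subprocess
import sys

# Each scanner exits non-zero when it finds issues (or fails to run).
CHECKS = [
    # Static analysis; --error makes semgrep exit non-zero on findings.
    ["semgrep", "--config", "auto", "--error", "."],
    # Flag dependencies with known vulnerabilities.
    ["pip-audit", "--requirement", "requirements.txt"],
]

def main() -> int:
    failed = False
    for cmd in CHECKS:
        print(f"Running: {' '.join(cmd)}")
        if subprocess.run(cmd).returncode != 0:
            print(f"FAIL: {cmd[0]} reported issues")
            failed = True
    return 1 if failed else 0

if __name__ == "__main__":
    sys.exit(main())
```

Wiring something like this in as a required pipeline check gives developers feedback on the pull request itself, rather than weeks later in a report.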

 


2. Build guardrails, don't be gatekeepers

In most companies, the security team can no longer be gatekeepers unless there is truly a critical risk to the business. Those emergency brake levers must be pulled infrequently, or else trust in the security team will be diminished, making the team less effective in the future when interacting with development teams and the broader organization.

Instead, the security team should focus on building guardrails: software libraries, tools, and processes that give developers useful, safe-by-default primitives they can use to do their jobs efficiently and securely.

This concept has been described as the "paved road" approach by the Netflix central engineering teams. It was discussed in Patrick Thomas and Astha Singhal's 2018 AppSec California talk, "We Come Bearing Gifts: Enabling Product Security with Culture and Cloud." (And if you like that talk, check out Singhal's more recent blog post, "Scaling Appsec at Netflix.")

Ask yourself:

  • What types of bugs do we tend to see from a given codebase, team, or the organization in general?
    • Is there something we can build that will solve those bug classes entirely?
  • What are the sharp edges in how we write software? Are there any types of functionality or features for which developers need to be very careful to not introduce a vulnerability?
    • Is there a way we can abstract away all of these complications and edge cases? Can we make it hard to do things insecurely?

In an ideal world, developers can focus on building new services and features that provide value to the organization, while "security" transparently happens around them, seamlessly integrated into the libraries they leverage and the systems they use. Developers wouldn't have to think about security; it would just happen.
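
As a sketch of what one paved-road primitive might look like, consider a hypothetical in-house HTTP helper that is safe against server-side request forgery (SSRF) by default. The function name and policy here are illustrative assumptions, not any specific company's implementation:

```python
"""A hypothetical 'paved road' helper: safe-by-default outbound HTTP requests.

Illustrative only; the name and policy are assumptions. Requires 'requests'.
"""
import ipaddress
import socket
from urllib.parse import urlparse

import requests

def safe_get(url: str, timeout: float = 5.0) -> requests.Response:
    """Fetch a URL with TLS required, a timeout enforced, and internal IPs blocked."""
    parsed = urlparse(url)
    if parsed.scheme != "https":
        raise ValueError("only https:// URLs are allowed")

    # Resolve the host and refuse private/loopback/link-local targets (basic SSRF guard).
    # Note: a production version would also pin the resolved IP for the actual
    # connection to defend against DNS rebinding.
    for family, _, _, _, sockaddr in socket.getaddrinfo(parsed.hostname, parsed.port or 443):
        ip = ipaddress.ip_address(sockaddr[0])
        if ip.is_private or ip.is_loopback or ip.is_link_local:
            raise ValueError(f"{parsed.hostname} resolves to a disallowed address: {ip}")

    # verify=True is the requests default; stated explicitly for clarity.
    return requests.get(url, timeout=timeout, verify=True)
```

Developers call safe_get() instead of rolling their own request code, and the secure behavior comes along for free.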

3. Automate everything

This may sound obvious, but it's important to review all of the standard processes and tasks of your security team and determine:

  • What are the tasks you regularly perform?
  • What is the value you are getting out of these tasks?
  • Which of these can you partly or wholly automate?
  • Are there any you don't have to do at all, because their benefit can be realized in some other, more scalable way?

In most companies, the number of developers grows much more rapidly than the security team does. It's therefore important to organize your efforts around activities and tooling that scale the security you can provide better than linearly with security-engineer time.

Security automation can manifest in many ways, but some examples include:

  • Performing a lightweight code scan on every new pull request (PR) and commenting on any issues found directly on the PR, similar to how peer code review is communicated.
  • Automatically fuzzing or DAST scanning all new versions of apps deployed in QA as well as all new container images.
  • When issues are identified, automatically creating Jira tickets that describe the issue and the recommended fix, then assigning the ticket to the dev team that owns it (see the sketch after this list).
    • Note: This can be difficult in practice, given the false positive rates of most scanning tools. Often, the results must go through a manual triage phase before the results are high-signal enough to send to developers. So consider implementing this for specific scanning approaches that are high-signal for your environment. (More on this below.)
  • Writing glue code that ties your various security tools together: running them continuously and automatically, de-duplicating their outputs, and parsing them into a common format that's then stored, with appropriate tags and metadata, in a central location such as Jira.
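
As one concrete piece of that automation, here is a minimal sketch of filing a Jira ticket for a triaged finding via Jira's REST API. The finding fields, project key, and environment variables are assumptions for illustration:

```python
"""Minimal sketch: file a Jira ticket for a triaged security finding.

The finding structure, project key, and environment variables are illustrative
assumptions; adapt them to your Jira instance and triage pipeline.
"""
import os

import requests

JIRA_URL = os.environ["JIRA_URL"]      # e.g. https://yourcompany.atlassian.net
JIRA_USER = os.environ["JIRA_USER"]
JIRA_TOKEN = os.environ["JIRA_TOKEN"]

def create_ticket(finding: dict) -> str:
    """Create an issue describing the finding and return its key (e.g. 'SEC-123')."""
    payload = {
        "fields": {
            "project": {"key": finding["team_project_key"]},  # route to the owning team
            "issuetype": {"name": "Bug"},
            "summary": f"[security] {finding['title']}",
            "description": (
                f"{finding['description']}\n\n"
                f"Recommended fix: {finding['recommendation']}"
            ),
        }
    }
    resp = requests.post(
        f"{JIRA_URL}/rest/api/2/issue",
        json=payload,
        auth=(JIRA_USER, JIRA_TOKEN),
        timeout=10,
    )
    resp.raise_for_status()
    return resp.json()["key"]
```

Only route findings through a path like this once they're high-signal; otherwise you'll train developers to ignore the tickets.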

Create processes and tooling so that security engineers spend their time only on high-leverage activities that genuinely require a human's perspective.

By investing in long-term, scalable wins that free up security engineer time, you'll be able to invest in additional tools and systems that automate further tasks. This will create a virtuous cycle of your security team becoming more leveraged and effective over time, even if it's not growing as quickly as the development teams you're supporting.

4. Prefer high-signal, low-noise tools and alerting

Whether you're building or tuning a security scanning tool or building some monitoring and alerting infrastructure, you generally face two choices that are fundamentally at odds with each other:

  1. Attempt to find as many issues as possible, at the risk of returning many items that aren't actual issues (false positives).
  2. Decide to be a bit more selective with the potential issues you report. You may miss some real issues (false negatives), but when you do claim that there's an issue, you’ll generally be right.

In an ideal world, you could build a tool with both low false positives and low false negatives. However, there are fundamental, provable computer science reasons why this can't be done in general; Rice's theorem, for instance, shows that nontrivial semantic properties of programs are undecidable, so any static or dynamic analysis must over- or under-approximate. In practice, you need to decide between these two options based on your situation and priorities.
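
To see why noise matters so much at scale, here's some back-of-the-envelope triage math. All the numbers are assumed, illustrative inputs:

```python
# Back-of-the-envelope triage math with assumed, illustrative numbers.
findings_per_week = 1000    # raw alerts from a noisy scanner
true_positive_share = 0.03  # assume only 3% of alerts are real bugs

real_bugs = findings_per_week * true_positive_share  # 30
false_positives = findings_per_week - real_bugs      # 970

minutes_per_triage = 5
hours_on_noise = false_positives * minutes_per_triage / 60  # ~81 hours/week

print(f"{real_bugs:.0f} real bugs, {false_positives:.0f} false positives")
print(f"~{hours_on_noise:.0f} engineer-hours/week spent triaging noise")
```

With numbers like these, the noisy scanner consumes roughly two full-time engineers just to surface 30 real issues, which is why many teams prefer option #2.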

Many companies choose #2—they're willing to accept the risk of potentially missing some bugs if, as a result, they don't have to waste time reviewing false positives that don't add value. Zane Lackey describes this and other great ideas in his 2017 Black Hat USA talk, "Practical Tips for Defending Web Apps in the Age of DevOps." 

Developing custom tools

Some large tech companies (e.g., Google, Facebook, and Instagram) have spent months or years implementing custom static analysis tools that are specifically tuned to their codebases. These tools understand bugs that are unique to the frameworks they've developed and the business logic issues they've had in the past.

But once these tools and their associated security checks are implemented, the work isn't complete. Multiple rounds of feedback-driven iteration must follow in which the analysis engine is made more precise or exposed to different information, and the checks are then tuned to reduce noise.

If your company doesn't have a dedicated team of static analysis experts, or if you don't have one or more security engineers who can devote months to building custom tools, it's likely best to focus on identifying the low-hanging fruit: bug classes you can reliably find in a high-signal way with minimal analysis complexity.

If there are bugs you can find without data-flow analysis, great! If there are bugs you can find with just grep, even better!
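
For example, here's a minimal sketch of a grep-style checker. The patterns below are assumed examples that tend to be high-signal in Python codebases; seed your own list with bug classes your team has actually shipped:

```python
"""Minimal grep-style checker: flag a few high-signal dangerous patterns.

The patterns are illustrative assumptions; tune them to your own bug history.
"""
import re
import sys
from pathlib import Path

# (regex, explanation) pairs; each should be precise enough to act on directly.
PATTERNS = [
    (re.compile(r"yaml\.load\((?!.*Loader)"),
     "yaml.load without an explicit Loader can execute arbitrary code"),
    (re.compile(r"subprocess\.\w+\(.*shell=True"),
     "shell=True invites command injection"),
    (re.compile(r"\bpickle\.loads?\("),
     "unpickling untrusted data can execute arbitrary code"),
]

def scan(root: str) -> int:
    """Print each match as path:line: explanation and return the hit count."""
    hits = 0
    for path in Path(root).rglob("*.py"):
        for lineno, line in enumerate(path.read_text(errors="ignore").splitlines(), 1):
            for pattern, why in PATTERNS:
                if pattern.search(line):
                    print(f"{path}:{lineno}: {why}")
                    hits += 1
    return hits

if __name__ == "__main__":
    sys.exit(1 if scan(sys.argv[1] if len(sys.argv) > 1 else ".") else 0)
```

A checker this simple misses plenty, but every line it prints is worth a developer's attention, which is exactly the tradeoff option #2 makes.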

Put your efforts in focus

While certain techniques that work in one company may not necessarily work in another, having the right mindset and principles can provide a valuable lens through which you can view other companies' solutions and apply them to your organization.

If you found this story helpful and you’d like to chat with some of the people whose work I referenced, check out DevSecCon Seattle, running September 16 to 17, where I will be on a panel with: 

  • Astha Singhal, Engineering Manager of Application Security at Netflix
  • Doug DePerry, Director of Product Security at DataDog
  • Justine Osborne, Offensive Security Technical Lead at Apple
  • Hongyi Hu, Engineering Manager of Product Security and Production Infrastructure Security at Dropbox
  • Zane Lackey, Chief Security Officer at Signal Sciences

If you have any questions, comments, or want to share some neat tips and tricks about how your company does things, feel free to reach out to me on Twitter @clintgibler. If you'd like to know when I write other content about DevSecOps and scaling security, subscribe to my tl;dr sec security newsletter.
