
Build in app sec like a pro: 5 key takeaways from BSIMM 11

John P. Mello Jr., freelance writer

Whether you're just starting to get your software security initiatives in place or you want to compare your organization to its peers, the latest BSIMM tool can offer up some instructive takeaways.

This year's edition of the Building Security In Maturity Model—BSIMM 11 (PDF)—analyzed security practices at 130 firms in multiple industry verticals, including financial services, independent software vendors, cloud, healthcare, fintech, Internet of Things, insurance, and retail. It describes the work of 8,457 software security professionals, who in turn guide the efforts of over 490,000 developers.

BSIMM 11 made it evident that organizations are actively working to speed up software security activity to match the pace of software delivery.

In the conventional model of on-premises development, coding and testing are separate: developers write the code, and the security team tests it.

That model no longer holds up. "In a DevOps, continuous-build environment, there's no way to apply those controls in that way," said Jim Routh, CISO at the life insurance company MassMutual.

The right way, Routh said, is for cybersecurity professionals to design instrumentation that lets developers measure quality themselves. That puts accountability where it's needed: the development leader becomes accountable for the quality of what's being built.
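To make that concrete, here is a minimal sketch of that kind of instrumentation: a quality gate a development team could run in its own pipeline. The finding format and thresholds are invented for illustration, not drawn from any particular tool.

```python
# Minimal sketch of a pipeline quality gate: the development team owns the
# thresholds, so accountability for quality sits with the dev leader.
# The finding format and thresholds here are hypothetical.
import sys

# Maximum findings the team allows per severity before the build fails.
THRESHOLDS = {"critical": 0, "high": 0, "medium": 5}

def gate(findings: list) -> bool:
    """Return True if the build passes the team's quality thresholds."""
    counts = {}
    for finding in findings:
        sev = finding["severity"]
        counts[sev] = counts.get(sev, 0) + 1
    return all(counts.get(sev, 0) <= limit for sev, limit in THRESHOLDS.items())

if __name__ == "__main__":
    # Example scanner output; in practice this would be parsed from a report.
    findings = [
        {"id": "CVE-2020-0001", "severity": "high"},
        {"id": "LINT-123", "severity": "medium"},
    ]
    if not gate(findings):
        print("Quality gate failed: security findings exceed team thresholds")
        sys.exit(1)
    print("Quality gate passed")
```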

The trend of developers being in the driver's seat for application security is reflected in these five key takeaways from the BSIMM 11 project.

1. Let engineering lead software security efforts

Matt Trevors, a technical manager at the Software Engineering Institute of Carnegie Mellon University (CMU) in Pittsburgh, explained that engineering-led development teams contribute to DevOps value streams in pursuit of resiliency.

This approach also works for developing shared responsibility for the security of a project. "It's not just the person with the security title on the team being held accountable for security," Trevors said.

"Sharing responsibility means sharing the accolades when you achieve success. It's very important to developing a sense of ownership and pride in the security of the application or system you're developing."
—Matt Trevors

Shared responsibility is a cornerstone of DevSecOps, said Larry Maccherone, DevSecOps transformation initiative lead at Comcast. "The whole idea of DevSecOps is you want the engineering team—a single entity that can think holistically—to own the problem of security and operations," he said.

"The reason that it's more powerful than having a separate security group is you get a less confrontational situation, and you get more holistic decisions."
—Larry Maccherone

His research shows that the holistic approach is six times more effective in producing vulnerability-free products than the dedicated, external security group approach.

Sandy Carielli, a principal analyst with Forrester Research, pointed out another problem with the approach of having a team that's led solely by security: It can't scale to meet the requirements of DevOps.

"The security team is further from where the software is being developed. If you follow a champions approach, where developers close to the product are trained in security principles and serve as local go-tos, you can scale the security team more effectively."
—Sandy Carielli

2. Software-defined security governance is now required

The movement to a cloud-first development model has made software-defined security governance an important activity. "IT organizations in the enterprise assume that developing in the cloud is the same as developing software on-prem. That's false," MassMutual's Routh said.

Shawn Smith, director of infrastructure at nVisium, an application security vendor, said that separating organizational resources by team and owner becomes far easier when using the ephemeral resources from cloud providers. What's more, they allow for strong controls that are enforced programmatically from the outset.

"Previously, policies had no teeth without an auditor constantly ensuring they were enforced. Now, policies can be implemented and enforced programmatically."
—Shawn Smith
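As an illustration of Smith's point, here is a minimal sketch of a programmatically enforced policy, using the team-and-owner separation he mentions. The resource records and required tags are hypothetical.

```python
# Minimal sketch of programmatic policy enforcement: every resource must
# carry "team" and "owner" tags, or provisioning is flagged.
# Resource records and tag names are hypothetical.
REQUIRED_TAGS = {"team", "owner"}

def violations(resources):
    """Yield resources that are missing any required ownership tag."""
    for res in resources:
        missing = REQUIRED_TAGS - set(res.get("tags", {}))
        if missing:
            yield res["name"], sorted(missing)

if __name__ == "__main__":
    resources = [
        {"name": "payments-db", "tags": {"team": "payments", "owner": "alice"}},
        {"name": "scratch-bucket", "tags": {"team": "data"}},  # missing owner
    ]
    for name, missing in violations(resources):
        print(f"Policy violation: {name} is missing tags: {missing}")
```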

Several key developments have taken software-defined governance out of the aspirational realm and into reality. New security tools let developers and DevOps teams codify security controls and checks as part of their workflows, allowing governance to be enforced throughout the entire lifecycle of applications and infrastructure, said Wei Lien Dang, co-founder and chief strategy officer of StackRox, maker of a security platform for containers and Kubernetes.

Adopting cloud-native technologies such as Kubernetes that are based on declarative APIs allows governance policies to be easily configured, Dang added.
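To illustrate Dang's point: because a Kubernetes manifest declares the desired state up front, a governance policy can be checked against it before anything is applied. Here is a minimal sketch with two hypothetical rules; it assumes PyYAML is installed and is nothing like a complete policy engine.

```python
# Minimal sketch: because Kubernetes objects are declarative, a governance
# policy can be checked against the manifest itself before it is applied.
# The two rules below are hypothetical examples, not a full policy engine.
import yaml  # assumes PyYAML is installed

MANIFEST = """
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web
spec:
  template:
    spec:
      containers:
        - name: web
          image: example/web:1.0
          securityContext:
            privileged: true
"""

def check(manifest: dict) -> list:
    """Return a list of governance violations found in the manifest."""
    problems = []
    containers = (
        manifest.get("spec", {})
        .get("template", {})
        .get("spec", {})
        .get("containers", [])
    )
    for c in containers:
        if c.get("securityContext", {}).get("privileged"):
            problems.append(f"{c['name']}: privileged containers are not allowed")
        if "resources" not in c:
            problems.append(f"{c['name']}: CPU/memory limits must be declared")
    return problems

if __name__ == "__main__":
    for problem in check(yaml.safe_load(MANIFEST)):
        print("Governance violation:", problem)
```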

In addition, he said, companies can implement community-led and ecosystem-wide standards and benchmarks for security best practices to ensure effective governance and risk management.

3. Security is becoming part of a quality practice

When developers take on more responsibility for security, they see vulnerabilities in a different light. "Frequently, a security group's definition of a problem is in terminology that's hard for a development team to understand," Comcast's Maccherone said.

When an engineering team owns security, he said, that changes.

"They use terminology familiar to them and tend to think of security as just another aspect of quality. 'A vulnerability is just a bug. Here's how we deal with bugs.'"
—Larry Maccherone

Forrester's Carielli added that in certain industries, resilience has always been a primary goal. Think about industrial equipment, automobiles, or medical devices, she said.

"Failing safely and recovering quickly could be the difference between life and death. As software gets added to these traditionally hardware products, resilience becomes as important as security and quality."
—Sandy Carielli

4. CSPs' shared-responsibility model doesn't always work for engineering teams

When applications are developed on premises, an engineering team can order up a development environment that includes things such as secure network configurations and identity and access management. But in a cloud-first model built on cloud service providers (CSPs), development teams make those decisions themselves. The same holds true for other security elements, including logging, encryption, access privileges, and multifactor authentication.

"Those decisions really shouldn't be left to developers, who are ill-equipped to make them. The net result is there are lots of opportunity for exposure and information leakage."
—Jim Routh

To reduce the risk created by developing in the cloud, cybersecurity engineers need to install guardrails to keep developers on the security straight and narrow. The guardrails are configuration management choices that are enforced in the provisioning of a cloud account, Routh explained.

They're designed and built for reuse across development teams, he added. The policy is embedded in the configuration management choices reinforced by scripts.
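Here is a minimal sketch of what provisioning-time guardrails can look like. The setting names and baseline values are hypothetical, not Routh's actual configuration.

```python
# Minimal sketch of provisioning-time guardrails: a new cloud account request
# is checked against a reusable security baseline before it is created.
# The setting names and baseline values are hypothetical.
BASELINE = {
    "block_public_storage": True,
    "encryption_at_rest": True,
    "mfa_required": True,
    "logging_enabled": True,
}

def provision(request: dict) -> dict:
    """Merge the request with the baseline; baseline values cannot be weakened."""
    account = dict(request)
    for setting, required in BASELINE.items():
        if account.get(setting) != required:
            print(f"Guardrail applied: {setting} forced to {required}")
            account[setting] = required
    return account

if __name__ == "__main__":
    # A developer asks for an account with logging disabled; the guardrail
    # script overrides the choice rather than leaving it to the developer.
    print(provision({"name": "dev-sandbox", "logging_enabled": False}))
```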

While the shared-responsibility model may add to the difficulty of managing risk, Maccherone argues that the benefits of working with CSPs outweigh the disadvantages.

"AWS, Azure, and Google Cloud are going to screw up sometimes and expose you to more risk. But by handing over security responsibilities to these very motivated and highly resourced third parties, you're likely to get better results, even though you have less management of the risk."
—Larry Maccherone

5. Automation can address people and skills shortages

The use of automation, such as bots or sensors, can be very effective in addressing the security manpower shortage when done correctly, CMU's Trevors said. But if you're not evaluating the resiliency of the AI or ML you're using, it could be detrimental, he added.

For example, if the training sets used for your machine learning algorithms are poisoned by an adversary, then the actions taken by sensors and bots will be incorrect.
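One defensive check against that failure mode, sketched minimally under the assumption that labeled snapshots of the training set are kept between retraining runs; the data format and threshold are hypothetical.

```python
# Minimal sketch of a poisoning sanity check: before retraining, compare the
# new training snapshot against the previous one and flag an unusually high
# rate of label flips on the same examples. Format and threshold are hypothetical.
MAX_FLIP_RATE = 0.01  # more than 1% relabeled examples triggers human review

def label_flip_rate(previous: dict, current: dict) -> float:
    """Fraction of shared example IDs whose label changed between snapshots."""
    shared = previous.keys() & current.keys()
    if not shared:
        return 0.0
    flips = sum(1 for ex_id in shared if previous[ex_id] != current[ex_id])
    return flips / len(shared)

if __name__ == "__main__":
    previous = {"req-1": "benign", "req-2": "benign", "req-3": "malicious"}
    current = {"req-1": "benign", "req-2": "malicious", "req-3": "benign"}
    rate = label_flip_rate(previous, current)
    if rate > MAX_FLIP_RATE:
        print(f"Hold retraining: {rate:.0%} of labels flipped; possible poisoning")
```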

System defenders aren't alone in using automation to achieve their ends; attackers are using it, too.

"There is no way for a human to keep up with an AI adversary, so it's going to become compulsory to have some form of bot sensors and automation in your environment just to keep pace with your adversaries."
—Matt Trevors

You can also use automation to enrich the jobs of existing security team members. It allows security teams to push repetitive processes off their plate and focus on the more specialized, strategic issues, Forrester's Carielli said.
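De-duplicating raw scanner findings before a human ever sees them is one example of that kind of repetitive work. A minimal sketch, with a hypothetical finding format:

```python
# Minimal sketch of automated triage: collapse duplicate scanner findings so
# humans only review unique issues. The finding format is hypothetical.
def deduplicate(findings):
    """Group findings by (rule, file) and keep one representative of each."""
    unique = {}
    for f in findings:
        key = (f["rule"], f["file"])
        unique.setdefault(key, f)
    return list(unique.values())

if __name__ == "__main__":
    findings = [
        {"rule": "sql-injection", "file": "api/orders.py", "line": 42},
        {"rule": "sql-injection", "file": "api/orders.py", "line": 88},
        {"rule": "hardcoded-secret", "file": "config.py", "line": 7},
    ]
    triaged = deduplicate(findings)
    print(f"{len(findings)} raw findings -> {len(triaged)} for human review")
```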

nVisium's Smith cautioned, however, that automation can't totally replace human expertise.

"Automation will almost always have false positives at one point or another in the software development lifecycle. As such, the more complex and niche issues will mostly not be discovered only by automation."
—Shawn Smith

'Shift left' is becoming 'shift everywhere'

Bringing security into the software development lifecycle earlier and earlier has been gaining momentum for years. But now, BSIMM 11 found, "shift left" is becoming "shift everywhere."

"Ultimately, shift-left is part of a broader trend within organizations to move towards a DevSecOps-type model that recognizes security is everyone's responsibility," StackRox's Dang said.

Development and engineering teams "increasingly have a significant role in implementing security," but effective security approaches require collaboration between these teams and security operators, he added. "The end goal is to have common, standardized workflows, tooling, and languages that all teams can use to protect their software environments, enforce policies, and reduce risks."

MassMutual's Routh maintained that shift left is no longer absolute. If there's a build every day, then development and security teams must decide whether to fix defects in the current 24-hour version, in the next version, or in some future version, he said.

"The right choice may be to leave the defect alone and fix it in a future build. That's not shift left; that's shift right. What does this mean for software security? It means shift is good."
—Jim Routh
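That fix-now-or-defer decision can itself be made systematic. Here is a minimal sketch of one possible scoring rule, not Routh's actual process; the severity scores and cutoffs are hypothetical.

```python
# Minimal sketch of a "shift right" triage rule: decide per daily build whether
# a defect blocks today's release or is scheduled for a future build.
# Severity scores and the cutoffs are hypothetical.
SEVERITY = {"critical": 4, "high": 3, "medium": 2, "low": 1}

def schedule(defect: dict) -> str:
    """Return which build should carry the fix for this defect."""
    score = SEVERITY[defect["severity"]]
    if defect.get("exploit_available"):
        score += 2
    if score >= 4:
        return "fix in current build"
    if score >= 3:
        return "fix in next build"
    return "defer to a future build"

if __name__ == "__main__":
    for defect in [
        {"id": "VULN-7", "severity": "high", "exploit_available": True},
        {"id": "VULN-9", "severity": "low"},
    ]:
        print(defect["id"], "->", schedule(defect))
```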
