
Epic DevSecOps fails: 6 ways to fail the right way

John P. Mello Jr., Freelance writer

No one likes to fail; we'd much rather succeed. Failure, though, is part of the human condition, and maybe we're better off because we can't avoid it, wrote Mark Miller, editor of Epic Failures in DevSecOps, a new 180-page book from DevSecOps Days Press.

"We learn more from failures than we do from successes."
Mark Miller

"When something goes as expected, we use that process as a mental template for future projects," he said. Success actually "stunts the learning process" because we think we have established a successful pattern, even after just one instance of success. This, in turn, has a tendency to morph into "this is the only way to do it," he said.

If something goes wrong, "we have to scramble, experiment, hack, scream, and taze our way through the process," he said. "Our minds flail for new ideas, are more willing to experiment, are more open to external input when we're in crisis mode."

Whether it's mistakes in testing or coding, not only can you learn from your mistakes, but you can also learn from those made by others, which is what Epic Failures in DevSecOps is all about. Here are six recommendations for doing failure the right way.

1. Partner, don't demand

Security teams can get a reputation as the "folks who say no" when they drop piles of software security bugs in developers' laps and demand that they be fixed. Understanding the other person's point of view and the needs of the business is essential to achieving a security team's goals.

"It's a really different approach to go into a room and say, 'Listen to what I have to say. This is what I need you to do,' versus going into a room and saying, 'Here's a problem that we need to work together to solve. Here's how I think we should approach it as a group,'" wrote Caroline Wong in her contribution to Epic Fails, "The Security Person Who Is Not Invited into the Room."

"It's only by partnering with others that we can secure the technology that we build, buy, sell, and operate," wrote Wong, who has been a manager at Zynga, eBay, and Symantec. She's currently the chief security strategist at Cobalt.io, a penetration-testing-as-a-service company.

Steve Wolf, the senior director of application security at accounting, consulting, and wealth management firm Moss Adams, agreed. Getting buy-in and jointly prioritizing security and functional commitments at a high level "help to maintain an even workload on the implementation team," he said.

He also recommended clearly communicating to the team in a meaningful way with negotiated, reasonable deadlines for implementation.

The "folks who say no" perception is especially prevalent in organizations that have silos among development, ops, and security, said Sherif Koussa, founder of Software Secured. He said that in those environments, it's believed that the only way to cross the fence and get things done is to go to war with other teams.

DevSecOps promises to break down the silos among dev, ops, and sec, he said.

"However, from my experience, the practice itself has no chance of doing that if a culture of cooperation between teams does not exist first."
Sherif Koussa

[ Also see: The state of DevSecOps: 5 best practices from the front lines ]

2. Have a plan

When integrating security tools into the DevOps pipeline, make sure you have a plan to address scaling issues.

In his chapter in Epic Fails, "The Problem with Success," DJ Schleen, a DevSecOps evangelist and security architect at a large healthcare organization, explained how a DevSecOps program went off the rails.

His team began by scanning a handful of applications and then expanded to over 500 overnight, he wrote. "We weren't prepared for the load."

The entire enterprise was on board with integrating the tools his team was providing, he wrote.

"We built it. Everyone came, and we weren't ready. This turned out to be our problem with success."
DJ Schleen

Schleen said that a successful DevSecOps program is hard to deploy. Accept the fact that "you’re going to fail, your solutions will fail," and that there's a balance that needs to be maintained between availability and performance, he said.
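
To make that balance concrete, here's a minimal sketch, in Python and not drawn from Schleen's chapter, of one common way to keep a scanning service from being overwhelmed when the workload jumps from a handful of applications to hundreds: queue the scan jobs and cap how many run at once. The run_scan function is a hypothetical stand-in for whatever scanner your pipeline actually calls.

```python
# Minimal sketch (not from the book): throttle scan jobs with a bounded
# worker pool so a sudden jump from a handful of apps to hundreds doesn't
# overwhelm the scanning service. `run_scan` is a hypothetical stand-in
# for whatever scanner your pipeline actually calls.
from concurrent.futures import ThreadPoolExecutor, as_completed

MAX_CONCURRENT_SCANS = 10  # tune to what your scanning service can sustain

def run_scan(app_name: str) -> dict:
    # Placeholder: call your real scanner (CLI, REST API, etc.) here.
    return {"app": app_name, "status": "scanned"}

def scan_all(app_names: list) -> list:
    results = []
    with ThreadPoolExecutor(max_workers=MAX_CONCURRENT_SCANS) as pool:
        futures = {pool.submit(run_scan, name): name for name in app_names}
        for future in as_completed(futures):
            try:
                results.append(future.result())
            except Exception as exc:
                # Accept that some scans will fail; record the failure and move on.
                results.append({"app": futures[future], "error": str(exc)})
    return results

if __name__ == "__main__":
    print(scan_all([f"app-{i}" for i in range(500)]))
```

The point is not the specific code but the planning decision it encodes: a deliberate cap on load, chosen before everyone shows up at once.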

"A lot of implementing security is turning over rocks and dealing with what you find," said Daniel Kennedy, research director for information security and networking at 451 Research, an information technology research and advisory company.

Of course, you should do a full evaluation of your tools, he said. For example, you should ensure that your static application security testing (SAST) provider is plugging into the DevOps tools your internal developers are already using. But also consider that you may need to "make adjustments to your approach as you go," Kennedy said.
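
As an illustration of what "adjusting your approach as you go" can look like in practice, here is a hedged sketch of a CI gate that runs a SAST tool and fails the build only for findings at or above a configurable severity. The sast-scan command and its JSON output are assumptions for the example, not any particular vendor's interface; swap in the scanner your developers already use.

```python
# Minimal sketch (an assumption, not any specific vendor's API): run a SAST
# tool from CI and fail the build only for findings at or above a severity
# threshold. Raising or lowering SEVERITY_GATE is one way to adjust your
# approach as you go without ripping the tool out of the pipeline.
import json
import subprocess
import sys

SEVERITY_RANK = {"low": 1, "medium": 2, "high": 3, "critical": 4}
SEVERITY_GATE = "high"  # start permissive, tighten as teams catch up

def main() -> int:
    # Hypothetical CLI and output format: replace with your real scanner.
    proc = subprocess.run(
        ["sast-scan", "--format", "json", "."],
        capture_output=True, text=True, check=False,
    )
    findings = json.loads(proc.stdout or "[]")
    gate = SEVERITY_RANK[SEVERITY_GATE]
    blocking = [
        f for f in findings
        if SEVERITY_RANK.get(f.get("severity", "low"), 1) >= gate
    ]
    for f in blocking:
        print(f"BLOCKING: {f.get('rule')} in {f.get('file')} ({f.get('severity')})")
    return 1 if blocking else 0

if __name__ == "__main__":
    sys.exit(main())
```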

For the discouraged, Schleen had this word of cheer:

"Even though there are risks integrating any tool into your automated pipelines, there's nothing that can't be overcome with a bit of planning and patience."
—DJ Schleen

3. Be bold. Lead change. That takes honesty.

Those are the recommendations that independent DevOps consultant Aubrey Stearn made in her chapter, "The Tale of the Burning Program," which details the trials and tribulations of transforming the development process in an organization.

She maintained that bold leaders who want to change the culture of software development in an organization need to control the narrative of change. "I didn’t magically fix a whole program or even every single component, but I did set a clear boundary to tell a specific story, a powerful software development story, one with a beginning and an end," Stearn wrote.

"Don’t underestimate the power of this cultural shift."
Aubrey Stearn

Building on honesty and transparency, Stearn was able to create a quality software development process. She did it with an internal team that took a small shim of functionality, originally written as 15 logic apps on an Azure service bus, and replaced it with four microservices with strong testing and well-defined coupling.

A new narrative about a development team can have a positive impact on its members. "People started to tell me how they had seen changes in people in my team; ... they were smiling and talking about how much they loved what they were doing," Stearn wrote in the book.

Building honesty and transparency into the software development culture of an organization can be a difficult task.

"Usually the app sec stuff is being pitched and pushed forward by a security person who ultimately is not going to be the primary user of the tools. Developers, quite rightly, have a fear of anyone who doesn't actively write code rolling in with lint tools to tell them how to do their job."
—Daniel Kennedy

Kennedy advised security teams to work closely with development teams to make sure developers feel included in the vendor selection process and can voice requirements about integration, and to make developer enablement the focus of the project, in addition to implementing the control requirements security wants.

No business leader wants to build secrecy and dishonesty into their organization's culture, said Software Secured's Koussa.

"I would expect most organizations to want transparency and honesty. However, different characters, egos, politics, and unaligned agendas get in the way. There is no piece of technology that can solve that."
—Sherif Koussa

4. Threat modeling requires buy-in from dev teams

Despite early buy-in by developers at his organization, once threat-modeling was rolled out, it became an epic fail, wrote Edwin Kwan, the application and software security team lead at Tyro Payments in Australia.

"Not long after we introduced it, the teams started having threat-model fatigue and were avoiding it," he wrote in his chapter, "Threat Modelling—A Disaster."

Even when teams had a good experience with a newer, small application, once they tried to threat model a legacy application, "it left them with a sour taste," he wrote. "Like that swig of cold coffee from last week you accidentally took because you forgot to clean the mug off your desk."

He wrote that developers found the process tedious, boring, and irrelevant to the security of their apps, mainly because all threats were treated the same, whether they were serious or minor.

Using feedback from the development teams, he wrote, a new threat-modeling process was designed. It eliminated the kind of repetition that made the exercise tedious and identified truly important security concerns. "The teams were able to get behind the new approach because it demonstrated its value," Kwan wrote.

The new approach also incorporated more automation. The more you can automate and remove the friction, the more likely it is to succeed, Kwan wrote. In fact, automation for simplification, repeatability, and speed is "a key ingredient" for shifting security to the left, he said.
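
As one illustration of that kind of friction-reducing automation (an assumption for this article, not Kwan's actual process), a lightweight check can ask for a threat-model review only when a change touches security-sensitive parts of the codebase, and skip components that already have a review on record. The path prefixes and review-log format below are made up for the example.

```python
# Minimal sketch (an illustration, not Kwan's actual process): request a
# threat-model review only when a change touches security-sensitive paths,
# skipping components that already have a review on record. The prefix list
# and review-log format are assumptions for the example.
import json
import pathlib

SENSITIVE_PREFIXES = ("auth/", "payments/", "crypto/", "session/")
REVIEW_LOG = pathlib.Path("threat_model_reviews.json")  # e.g. {"auth/": "2019-03-01"}

def needs_threat_model(changed_files: list) -> list:
    reviewed = json.loads(REVIEW_LOG.read_text()) if REVIEW_LOG.exists() else {}
    flagged = set()
    for path in changed_files:
        for prefix in SENSITIVE_PREFIXES:
            if path.startswith(prefix) and prefix not in reviewed:
                flagged.add(prefix)
    return sorted(flagged)

if __name__ == "__main__":
    # Example: a diff touching auth code and some documentation.
    print(needs_threat_model(["auth/login.py", "docs/readme.md"]))
```

A check like this replaces a blanket "threat model everything" rule with a targeted prompt, which is the repetition-killing, friction-reducing spirit Kwan described.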

[ Also see: Getting to DevSecOps: 5 best practices for integrating security into your DevOps ]

5. Red-teaming requires trusted relationships

Introducing red-teaming into Fabian Lim's organization brought distrust into the DevSecOps process. Lim, a DevSecOps engineer and author of the chapter in Epic Fails titled "Red Team the Culture," wrote that the process was sprung on developers without any orientation.

"We had no clue who the red team was and how it was going to be conducted clearly," he wrote. "We were notified with an email saying: 'There will be a red teaming exercise for this two weeks. Be prepared.' Without any advice, we were clueless about what we should and should not do. We were clueless about the intentions of this exercise."

Needless to say, the red team made the developers look bad. Red-teamers social-engineered their way into the company's building, installed malware on developers' laptops, stole source code and confidential information, and leveraged the infected laptops to access other machines on the network. "The red teamers had slipped under our radar and pillaged our resources," Lim wrote. "We did not detect or report any suspicious activities."

What's more, the red team offered no useful remediation to the problems they found. "It felt like they were [at] arm's length when it comes to remediation," Lim wrote. "It felt like they did not care to provide any relevant solution for the developers and expected the developers to find solutions for ourselves."

Although the exercise exposed security flaws in the development process, it damaged the working relationship between the company's security team and developers. "We experienced various push backs, hurt feelings and even the loss of trust," Lim wrote.

Even with these issues, he wrote, "there were positive outcomes and lessons learned, although the price tag might have been too high."

When a new process like red teaming is introduced into an organization, the human element, in this case the developers, is often ignored. Individuals involved in this kind of transformation usually have the proper training to deal with process or technology problems, but they're not necessarily "trained to handle the human part of the transformation," Koussa wrote.

6. Don't over-engineer your security solution

In his chapter, "Unicorn Rodeo," Stefan Streichsbier, founder of GuardRails—a platform for orchestrating open-source security tools—told the tale of building an elaborate security integration testing scheme and a central security integration repository. These were intended for a large application with dozens of microservices and a staff of more than 120 people.

Unfortunately, his security team's work was for naught. Application staff turnover and a switch to a new source code management system prevented the team's months of work from seeing the light of day.

"Understanding and respecting culture is the key to success in DevSecOps, and culture equals people."
—Stefan Streichsbier

No matter how great you think your solution is, it has to be built for the right people, he said. Spend time identifying influential people on the development team who can become security champions, Streichsbier said.

And don't waste time over-engineering a security solution, he said. Treat it as a series of small experiments that have to be validated. Keeping things simple is the best approach, Moss Adams' Wolf agreed. However, he said, "incremental implementation should be driven by acceptable risk."

Align the steps or experiments with criticality, so that the most serious risk is addressed or mitigated first. Do this while keeping in mind that some residual risk remains during the time the solution is being implemented, he said.

Learn from others' fails

There's no shortcut to learning, editor Miller wrote in wrapping up Epic Fails. But, he added, "let's agree it's not necessary for us all to make the same mistakes." Learn from others, and share what you've learned.

"If all goes well, you’ll have your own failures to brag about."
—Mark Miller
