
What your DevOps team needs to know: 4 lessons from exploited vulnerabilities

Robert Lemos, Freelance writer

In March 2019, an online intruder discovered a vulnerability in the web application firewall fronting financial giant Capital One's cloud storage, allowing the attacker to send commands from one system to the other.

The exploitation of the vulnerability—an issue known as server-side request forgery (SSRF)—resulted in the leaking of sensitive personal information on more than 106 million individuals in the United States and Canada. Capital One had misconfigured its web application firewall, failing to prevent privileged access to the company's cloud storage server hosted on Amazon Web Services.
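The pattern is simple to reproduce. Below is a minimal, hypothetical sketch (the function and host names are illustrative, not Capital One's code) of the kind of URL validation that blocks an SSRF request aimed at the cloud metadata service:

```python
from ipaddress import ip_address
from urllib.parse import urlparse
import socket

# AWS instance metadata endpoint: the classic SSRF target in cloud environments
METADATA_HOST = "169.254.169.254"

def is_safe_url(url: str) -> bool:
    """Reject URLs that resolve to loopback, private, or link-local
    addresses, which is where SSRF payloads aim."""
    host = urlparse(url).hostname or ""
    try:
        addr = ip_address(socket.gethostbyname(host))
    except (OSError, ValueError):
        return False  # unresolvable hosts are rejected, not fetched
    return not (addr.is_loopback or addr.is_private or addr.is_link_local)

# An attacker-supplied "document URL" pointed at the metadata service is blocked:
print(is_safe_url(f"http://{METADATA_HOST}/latest/meta-data/"))  # False
```

Note that blocklist checks like this are only one layer; DNS rebinding and redirects can defeat naive versions, which is part of why SSRF is hard to mitigate fully.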

This class of vulnerability was well understood and easy to exploit, and fixing it should have been a priority. But the real takeaway from the incident is how easily an important security flaw can be missed, said Frank Kim, instructor and curriculum lead at the SANS Institute, a technical training organization.

"Even Capital One, a leader in DevOps security, makes mistakes. There are so many vulnerabilities, and companies are trying to push out so much functionality. We need to be able to close the window of vulnerability more quickly."
—Frank Kim

Most companies are doing a credible job of finding and fixing vulnerabilities in code, ensuring that deployed servers are not misconfigured, and putting defenses in place to protect networks and data. The problem, however, is that a single misconfiguration can have massive repercussions. SSRF, for example, is a well-understood class of vulnerability, but such attacks are hard to mitigate on cloud services, and platform-level protections did not exist a year ago.

"As we know about DevOps, it is not just the technology when we are talking about the vulnerabilities, it is the process and the technology. We can't just throw all these tools at the DevOps teams, because even though we say developers should do more security, their incentives—getting functional code out as quickly as possible—have not changed."
—Frank Kim

In November 2019, eight months after the breach, Amazon implemented protections against such attacks.
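Those protections (IMDSv2) replace open GET requests to the metadata service with a session-token handshake: a PUT request first obtains a short-lived token, and metadata reads must then carry it, which a blind SSRF GET cannot do. A minimal sketch of the flow, using a stand-in `http` client so the example runs without a real EC2 instance; the header names are AWS's, the rest is illustrative:

```python
# IMDSv2 in outline. The `http` object is a stand-in for a real HTTP client.
TOKEN_URL = "http://169.254.169.254/latest/api/token"
META_URL = "http://169.254.169.254/latest/meta-data/"

def fetch_metadata(path, http):
    # Step 1: PUT for a session token (a plain SSRF GET never gets this far)
    token = http.put(
        TOKEN_URL, headers={"X-aws-ec2-metadata-token-ttl-seconds": "21600"})
    # Step 2: GET metadata with the token attached
    return http.get(META_URL + path, headers={"X-aws-ec2-metadata-token": token})

class FakeIMDSv2:
    """Illustrative stand-in for the metadata service."""
    def put(self, url, headers):
        return "fake-token"
    def get(self, url, headers):
        if headers.get("X-aws-ec2-metadata-token") != "fake-token":
            raise PermissionError("401: token required")  # what a raw SSRF GET hits
        return "iam/security-credentials/"

print(fetch_metadata("iam/", FakeIMDSv2()))
```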

Massive breaches are often caused by simple vulnerabilities that developers, operations, and application-security teams have missed. Here are four lessons from major breaches on how to catch the next seemingly small software bug before it snowballs into a massive breach.


1. Know your assets

On March 6, 2017, the Apache Foundation released a patch for a vulnerability (CVE-2017-5638) in Apache Struts 2 that allowed attackers to execute commands using a specially crafted HTTP header. Four days later, attackers used the flaw to scan servers at credit-reporting agency Equifax, compromising a key server there. From May to July, attackers returned and stole sensitive financial information on more than 145 million American adults from the company.

The cost of the Equifax breach to date: $1.4 billion.

Overall, Equifax knew about the vulnerability and had taken steps to patch the flaw across its systems. However, application-security experts have compiled a list of 29 failures that led to the eventual breach. Perhaps the two most important: a lack of visibility into what assets the company had deployed, and into what software those assets were running, said Jim Manico, founder of Manicode Security and volunteer project leader at the Open Web Application Security Project (OWASP).

"All that Equifax had to do was apply the patch across their network, but on a couple of servers, either they missed them or they applied the patch and it broke, so they rolled it back."
—Jim Manico

Being able to determine what assets may be affected by a new vulnerability, and doing it at speed, has become increasingly important, he said. Attackers create exploits from the information they glean from patches much more quickly than they could just a decade ago. In 2006, the average time it took to exploit a known vulnerability was 45 days; in 2015, it was two weeks; and with the Apache Struts 2 flaw, it took three days, Manico said.

"The responsibility for patching has changed over the last 15 years as attackers have sped up their ability to go from a software patch to an exploit. For anyone running Struts 2, they had 72 hours to patch their systems. This is the No. 1 problem for application security, and developers continue to take it lackadaisically."
—Jim Manico
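One concrete step toward beating that 72-hour window is keeping a machine-readable inventory of components and versions, so a new advisory can be matched against deployed software in minutes. A hedged sketch follows; the inventory format is invented for illustration, the version handling is simplified, and a real tool would use a full SBOM plus a vulnerability feed. The affected ranges match CVE-2017-5638 (Struts 2.3.5 through 2.3.31 and 2.5 through 2.5.10):

```python
# Match a new advisory against a simple component inventory.
AFFECTED = {
    "struts2-core": [("2.3.5", "2.3.31"), ("2.5.0", "2.5.10")],  # CVE-2017-5638
}

def version_tuple(v, width=3):
    """'2.3.20' -> (2, 3, 20); shorter versions are zero-padded."""
    parts = [int(x) for x in v.split(".")]
    return tuple(parts + [0] * (width - len(parts)))

def is_vulnerable(component, version):
    v = version_tuple(version)
    return any(version_tuple(lo) <= v <= version_tuple(hi)
               for lo, hi in AFFECTED.get(component, []))

# Inventory scan: which deployed systems need the 72-hour patch?
inventory = {"billing-api": ("struts2-core", "2.3.20"),
             "web-portal": ("struts2-core", "2.5.13")}
at_risk = [name for name, (c, v) in inventory.items() if is_vulnerable(c, v)]
print(at_risk)  # ['billing-api']
```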

2. Maintain coverage of important classes of vulnerabilities

The SSRF issue that led to the Capital One breach demonstrates the importance of attaining 100% coverage across both the codebase and deployed applications. While SSRF can be hard to patch in many cases, the class is relatively easy for attackers to find, according to Evan Johnson, manager of the product security team at CloudFlare. He wrote in an analysis of the Capital One breach:

"SSRF is a bug hunter's dream because it is an easy-to-perform attack and regularly yields critical findings. ... The problem is common and well-known, but hard to prevent and does not have any mitigations built in to the AWS platform."
—Evan Johnson

While the OWASP Top-10 is a good start for a list of software vulnerability classes that need coverage, every development team has a different list. What is important is for the company to make sure it has good coverage—through tools and processes—of the vulnerability classes on the developers' list, said Dan Cornell, chief technology officer of Denim Group, an application-security consultancy.

While companies are trying to push more security checks into integrated development environments (IDEs), they should "shift left" only those tools that retain a high level of coverage without disrupting development, he said.

"From a coverage standpoint, you can test for certain vulnerabilities at a certain level of quality, so there are certain types of checking that you can push left. But there are some things that take too much time to integrate into a pipeline—like full static analysis to find SQL injection issues—to move onto the developers' plates."
—Dan Cornell
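A fast, narrow check is the kind of thing that can move onto the developers' plates. Here is a hypothetical sketch of a pre-commit-style scan for string-concatenated SQL (a common injection smell); a real pipeline would leave deeper, slower analysis to a full SAST tool:

```python
import re

# Flags SQL built by string concatenation. Deliberately narrow and fast:
# suitable for a pre-commit hook or IDE plugin, unlike full static analysis,
# which stays in the pipeline.
SQL_CONCAT = re.compile(r"\b(SELECT|INSERT|UPDATE|DELETE)\b[^\"']*[\"']\s*\+",
                        re.IGNORECASE)

def quick_sqli_check(source: str):
    """Return 1-based line numbers that look like concatenated SQL."""
    return [i for i, line in enumerate(source.splitlines(), start=1)
            if SQL_CONCAT.search(line)]

risky = 'query = "SELECT * FROM users WHERE id=" + user_id'
safe = 'cur.execute("SELECT * FROM users WHERE id=%s", (user_id,))'
print(quick_sqli_check(risky + "\n" + safe))  # [1]
```

The trade-off Cornell describes is visible here: this check runs in milliseconds but catches only the obvious cases, while taint-tracking static analysis catches more and costs far more time.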


3. Teach security continuously

Development teams usually do not have an incentive to prioritize security in their daily work. Instead, security tends to get in the way of implementing features and doing so on deadline.

Security is a tax on developer productivity, but done correctly, it need only be an incremental one. Addressing issues early minimizes both the impact on the developer and the impact on the company, which could otherwise be substantial, said Tim Mackey, principal security strategist with vulnerability-management firm Synopsys.

"When I've talked to organizations that have had a data breach, they say they have had to spend 50% of their time following the breach fixing the issues—we are talking about impacts of that magnitude. Developers need the continuous positive reinforcement in their IDE ... where you can make them feel incremental pain and get them information in context so they do the right thing, because they may not know how."
—Tim Mackey

Security requirements for developers should also be explicit and actionable, said Nidhi Shah, principal security researcher at Micro Focus. Application-security teams should conduct threat modeling, work with the development team to create the architecture for the product, and then turn those considerations into part of the specification, she said.

"The developer should not have to translate architecture requirements into tasks during development. Security should instead be specified as part of the feature template that the developer needs to produce."
—Nidhi Shah
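In practice, that can be as simple as making security criteria a required field of the feature template. A hypothetical sketch (the field names and criteria are invented for illustration, not any particular team's template):

```python
# Security criteria as a first-class, required part of the feature spec,
# so the developer implements them like any other acceptance criterion.
FEATURE = {
    "name": "export-report",
    "acceptance_criteria": ["user can download a CSV of their own reports"],
    "security_criteria": [
        "export endpoint requires an authenticated session",
        "a user can export only reports they own",
    ],
}

def ready_for_development(feature):
    """A feature without explicit security criteria goes back to design."""
    return bool(feature.get("security_criteria"))

print(ready_for_development(FEATURE))  # True
```

The point of the gate is Shah's: the developer never has to translate an architecture review into tasks, because the tasks arrive already written.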

4. Strive to make patching painless

The lowest level of application-security maturity is not patching at all, which should be unacceptable for any software development team. Companies whose application-security teams track open-source components, triage patches, and include coding feedback in the integrated development environment are above average, at Level 2.

Yet these scanners miss things.

For OWASP's Manico, the highest level of maturity is not depending on the scanners at all, but using automation to lower the disruption of patching.

"Attaining this level of maturity makes your security posture the best, but it also makes your functionality the most fragile, because you are changing every day. If you update every day, you need two things to go fast: one, rigorous security and functionality automation, and two, the willingness to fix dependencies in the open-source world."
—Jim Manico
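The automation Manico describes reduces to one invariant: an update ships only if the full test suite still passes, and rolls back otherwise. A hedged sketch, with callables standing in for the real steps (dependency bump, CI run, revert commit):

```python
# Painless patching in outline: apply, verify, roll back on failure.
def try_update(apply_patch, run_tests, rollback):
    apply_patch()
    if run_tests():
        return "shipped"
    rollback()
    return "rolled back"

# Simulate a dependency bump whose test run fails:
state = {"version": "1.0"}
result = try_update(
    apply_patch=lambda: state.update(version="1.1"),
    run_tests=lambda: False,          # suppose the suite fails
    rollback=lambda: state.update(version="1.0"),
)
print(result, state["version"])  # rolled back 1.0
```

This is why Manico's two prerequisites matter: without rigorous test automation, `run_tests` tells you nothing, and daily updates become daily breakage.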

Yet getting to the point where everything can be patched quickly and relatively painlessly may be out of reach for most development organizations, said Denim Group's Cornell.

"It is easy to give the advice that you need to patch everything, but it is harder to follow at scale, and there are some systems which cannot suffer a disruption. When you patch a server and the Oracle database does not come up and you are losing thousands of dollars a minute, that will make you more hesitant to patch quickly."
—Dan Cornell

In the end, application-security teams need to recruit developers to their cause by making it easy to learn security and integrating security tools that do not get in the way of development.

We need to provide developers help, said Micro Focus's Shah.

"If security is not part of the functional requirement, they are not going to implement it. Deciding whether we can upgrade, whether the vulnerability is applicable to our software—those are questions that often the developer is not able to determine."
—Nidhi Shah
