

3 application security fundamentals every developer should know

Rob Lemos, Writer and analyst

Academic researchers and developers don't often cross paths, but several teams presenting at the recent USENIX Security conference shared findings that hold lessons for application programmers and development teams.

One team found that simple coding errors are less common than fundamental security mistakes, while another found that most mobile developers still do not use security tools and processes to check their code for bugs. A third group of cybersecurity specialists from the US military presented lessons in training service members to find vulnerabilities.

Here are three fundamental lessons from the USENIX Security conference for developers and application-security specialists.

1. Focus on best practices and code review

Developers get a lot of criticism for writing insecure code, but application security professionals need to take much of the responsibility for coding failures and vulnerabilities. Rather than focusing on teaching developers complex security models or relying exclusively on security tools, app sec experts should codify best practices and document the consequences of not following them, make code reviews part of the development process, and simplify APIs and libraries to reduce mistakes, concluded a research group from the University of Maryland, College Park.

Here's what Daniel Votipka, a PhD student at UMD's Security, Privacy, and People Lab (SP2), said during a presentation at the USENIX Security conference:

"We should be asking, How do we make [secure] programming easier ... and how do we improve the effectiveness of these solutions?"
—Daniel Votipka

Votipka and his colleagues analyzed 94 submissions to the Build It, Break It, Fix It contest, which pits teams of developers against one another in a race to build an application with specific functionality, then allows them to find vulnerabilities in other teams' software, and finally tasks them with fixing their own vulnerabilities. Across those 94 submissions, the researchers found 866 exploits that traced back to 182 vulnerabilities.

Complex implementation flaws (such as failing to conduct an integrity check or to account for side-channel leakage) and conceptual errors (such as failing to choose a random initialization vector) were the most common issues. Failing to implement certain security features, choosing an insecure library or encryption algorithm, and making simple mistakes were all less common.
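Both of those error classes have well-understood remedies. As an illustration (a minimal sketch, not code from the contest), the Python snippet below uses the third-party cryptography package's AES-GCM support to sidestep two of the pitfalls named above: it generates a fresh random nonce (initialization vector) for every message, and because AES-GCM is an authenticated mode, the integrity check happens automatically on decryption.

```python
# Minimal sketch (not from the contest entries): avoid a predictable IV and a
# forgotten integrity check by using authenticated encryption with a fresh
# random nonce per message. Requires the third-party "cryptography" package.
import os

from cryptography.hazmat.primitives.ciphers.aead import AESGCM

key = AESGCM.generate_key(bit_length=256)
aesgcm = AESGCM(key)

def encrypt(plaintext: bytes) -> bytes:
    nonce = os.urandom(12)  # fresh random 96-bit nonce (IV) for every message
    return nonce + aesgcm.encrypt(nonce, plaintext, None)

def decrypt(blob: bytes) -> bytes:
    nonce, ciphertext = blob[:12], blob[12:]
    # AES-GCM verifies the authentication tag and raises InvalidTag if the
    # ciphertext was tampered with, so the integrity check can't be skipped.
    return aesgcm.decrypt(nonce, ciphertext, None)
```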

Votipka said code review is beneficial in catching mistakes.

"When we looked at what teams did—performing best practices like not copy and pasting their security checks all over their codebase but only having them in one place—that also reduced the number of mistakes. All of the mistakes were identified during the Break It phase except for one, indicating that code review is helpful in preventing these errors."
—Daniel Votipka
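Votipka's point about keeping security checks in one place is easy to picture in code. The sketch below is hypothetical (the require_role decorator and the handlers are invented for illustration, not taken from any contest entry): a single authorization helper that every handler reuses, so there is exactly one copy of the check to review and fix.

```python
# Hypothetical illustration of the "one place" practice Votipka describes:
# a single authorization helper that every handler calls, instead of each
# handler carrying its own copy-pasted version of the same check.
from functools import wraps

def require_role(role):
    """Decorator: the only place the permission check lives."""
    def decorator(handler):
        @wraps(handler)
        def wrapper(user, *args, **kwargs):
            if role not in user.get("roles", ()):
                raise PermissionError(f"user lacks required role {role!r}")
            return handler(user, *args, **kwargs)
        return wrapper
    return decorator

@require_role("admin")
def delete_account(user, account_id):
    ...  # business logic only; no inline security check to get wrong

@require_role("admin")
def export_audit_log(user):
    ...
```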

2. Perform code security checks, especially threat assessments

A group of researchers from European universities used data scraped from the Google Play app store, along with surveys, to analyze Android developers' focus on security and to look for correlations between developer characteristics and security outcomes. The researchers invited more than 55,000 developers to complete a survey, analyzed the more than 300 validated responses they received, and conducted binary analysis on those developers' applications.

Unsurprisingly, the researchers discovered that most mobile-application developers do not focus on security. Fewer than half of the survey respondents applied any of the five most cost-effective security-assurance techniques (identifying known vulnerabilities in libraries, penetration testing, threat assessment, manual code review, and automated code review) on every software release, the researchers found. Automated code review was the most common technique, followed by automated detection of vulnerable libraries.
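For teams that want to adopt the two techniques respondents used most, a simple release gate can run both automatically. The script below is a sketch under stated assumptions, not anything the researchers prescribed: it invokes pip-audit (a PyPA tool that flags known-vulnerable dependencies) and Bandit (a static analyzer that performs automated security review of Python code), and assumes a project with a requirements.txt file and source under src/.

```python
# Minimal release-gate sketch (an assumption, not from the study): run two of
# the assurance techniques described above on every release. Assumes pip-audit
# and bandit are installed (pip install pip-audit bandit) and that dependencies
# live in requirements.txt with source code under src/.
import subprocess
import sys

CHECKS = [
    ["pip-audit", "-r", "requirements.txt"],  # known vulnerabilities in libraries
    ["bandit", "-r", "src"],                  # automated security code review
]

def main() -> int:
    for cmd in CHECKS:
        print(f"running: {' '.join(cmd)}")
        if subprocess.run(cmd).returncode != 0:
            print("security check failed; blocking the release")
            return 1
    return 0

if __name__ == "__main__":
    sys.exit(main())
```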

Charles Weir, a PhD student at Lancaster University in the UK, said in a presentation:

"That suggests that developers are driving the adoption of security as experts would have had them doing threat assessments too."
Charles Weir

The push for security comes mainly from developers themselves, with 61% of respondents conducting the security checks on their own, while customer pressure to fix security issues accounted for only 13% of the cited reasons for security changes. And while nearly half of developers cited the requirement to comply with the European Union's General Data Protection Regulation (GDPR) as a reason for security changes, those changes were mostly cosmetic, such as updating the privacy policy or adding pop-up dialog boxes, the researchers found.

The researchers also discovered that fewer than a quarter of developers had access to a security expert, but that having more security requirements correlated with more frequent use of assurance techniques and more frequent software patches and updates.

3. Focus testers on the low-hanging fruit—breadth, not depth

Many vulnerability researchers take a risky depth-first approach to finding vulnerabilities, searching for issues in software with no prior indication that bugs exist, because the payoff, a working exploit, is so valuable. A group of cybersecurity experts from the US military presented an alternative, breadth-first method: use automation to flag likely issues across a broad selection of programs, have apprentices whittle down those issues with well-known tools, and then focus more expert analysts on the most probable vectors of attack.
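In code, the breadth-first idea boils down to a cheap automated pass over every target followed by a ranking step, so scarce expert time goes to the most promising leads. The sketch below is hypothetical: run_quick_scan is a stand-in for whatever fast automated tooling (a fuzzer smoke test, a static analyzer) a team actually has, and the Target type is invented for illustration.

```python
# Hypothetical sketch of breadth-first triage: touch every target cheaply,
# then rank by automated findings so analysts start with the best leads.
from dataclasses import dataclass, field

@dataclass
class Target:
    name: str
    findings: list[str] = field(default_factory=list)

def run_quick_scan(target: Target) -> None:
    """Placeholder for a fast automated pass (fuzzer smoke run, linter, etc.)
    that appends any flagged issues to target.findings."""
    ...

def triage(targets: list[Target]) -> list[Target]:
    for target in targets:  # breadth: every target gets a cheap look
        run_quick_scan(target)
    # depth only where automation found something: targets with the most
    # automated findings go to the expert analysts first
    return sorted(targets, key=lambda t: len(t.findings), reverse=True)
```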

In an experiment with a dozen hackers, the researchers found that testers who took the breadth-first approach found an order of magnitude more bugs than those who focused on deep analysis of fewer types of issues or smaller portions of code. The breadth-first testers also produced much more documentation of code issues.

Timothy Nosco, a cybersecurity expert with the US Army who presented the research, said the depth-first approach, in which a team selects a single target out of many and spends a great deal of time and energy hunting for bugs without any prior indication of success, is a lot like diamond mining.

"The payoff is high, so the team is willing to invest a lot in searching and digging."
—Timothy Nosco

The breadth-first approach not only finds more issues but also suits a situation common to development teams and military organizations alike: having to train new developers or vulnerability researchers to find bugs and to recognize vulnerable code.

Put these fundamentals into action

While academics are not always known for producing actionable conclusions, these three groups of presenters offered clear steps that development teams can take to improve the security of their code.

Building development processes that incorporate security best practices and application security policies can be more successful than training individual developers in security methods, and code reviews can catch mistakes that otherwise might become vulnerabilities in production code.

Developers who use one or more common security-assurance techniques every time they release software are already doing better than more than half of development teams. Finally, teams whose programmers and engineers have varying levels of security experience can benefit from using a variety of approaches to find low-hanging fruit, rather than drilling down on a specific type of vulnerability, according to the US military researchers.
