

6 ways to eliminate the most common security #fails in mobile apps

Rob Lemos, writer and analyst

Looking for something new, JD Glaser left his security company in 2008 and turned to mobile game development. Through the lens of his security background, Glaser saw a lot of problems in mobile app development practices. Code snippets with simple vulnerabilities are passed around programming forums and incorporated into projects, spreading bad development practices, he says.

"I've done it myself. There was a way of programming in JavaScript that I did for a long time that was not secure," Glaser said. "You need to know what you are doing, or it will come back to bite you later."

Security vulnerabilities are common in mobile applications. According to data collected by security firm FireEye, the JavaScript-Binding-Over-HTTP (JBOH) vulnerability that could allow the proverbial coffee shop attacker to inject code into certain applications affected 31 percent of popular Android applications. Additionally, an assessment of 61 different mobile apps by the application security firm Denim Group found that all had at least one critical vulnerability.

"A major contributor is simply not accounting for security in the first place and trying to retrofit it later," Glaser says.

Luckily, there are a lot of places to learn about secure mobile app development. The OWASP Mobile Security Project gives developers a great deal of information on the most common coding mistakes and how to avoid them. To find potential vulnerabilities, third-party tools can scan code in real time to highlight potential security issues, while more complex tools can analyze an entire codebase or the execution of the resulting binary.

These are valuable tools because they give the developer access to the collective intelligence of the security community, says Charles Henderson, vice president of managed security testing company Trustwave.

"Use the information that is out there," says Henderson. "We find far fewer vulnerabilities with the developers who have read through the literature."

Application security experts recommend a number of steps mobile app developers should follow to help secure their code. Here are the six most common.

1. Design security into the mobile app

The first step should always be to consider security during the application design stage. Relying too much on data stored on the client, for example, can offer up a vector of attack for a variety of bad actors, according to game developer Glaser.

"I have become a big believer that you can't retrofit security, which is how most things get done," Glaser says. "You build an application—maybe you do some testing—but that is very difficult and very costly."

In addition, developers should learn the proper way to securely code certain aspects of their product. Glaser, for example, wrote a book focused on the security considerations for creating mobile apps with JavaScript and PHP. OWASP has created a simple set of rules to help developers avoid a common vulnerability that affects both websites and mobile applications: cross-site scripting flaws.

By considering this advice at the beginning of a project, developers can more easily and cheaply create secure applications.
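The core of those OWASP rules is output encoding: any untrusted value gets escaped before it is placed into HTML. A minimal Python sketch of the idea (the `render_comment` helper is hypothetical, not from OWASP's materials):

```python
import html

def render_comment(username: str) -> str:
    """Build an HTML snippet, HTML-encoding untrusted input first.

    Following the OWASP XSS prevention rules, an untrusted value is
    escaped before being inserted into element content.
    """
    return "<p>Posted by: {}</p>".format(html.escape(username))

# A malicious username is rendered as inert text instead of live script.
print(render_comment("<script>alert(1)</script>"))
```

The same principle applies in a mobile WebView: anything a user or a server can influence must be encoded for the context where it lands.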

2. Test each iteration of the product

Once a secure design is created, developers should make sure their code doesn't result in vulnerabilities. Frequent code scanning (not just at the end of the project during the quality assurance stage) and threat modeling can help detect any vulnerabilities or design flaws that creep into the application, says Sriram Ramanathan, chief technology officer at Kony, a maker of mobile app development tools.

When the company makes its tools, it follows a secure design lifecycle that incorporates extensive testing.

"We document a very clear set of security use cases, and then we design abuse cases, which are ways of testing the products," he said. "The design process for us includes a threat model, which defines the threat vectors, and then we come up with means of engineering mitigations and we test for those issues."

As part of testing, developers should also run their application and monitor network traffic. Often, coding libraries and advertising frameworks can perform insecure activities, which are revealed through monitoring.
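Those "abuse cases" can be written as ordinary negative tests. The sketch below is illustrative only: it assumes a hypothetical HMAC-signed session token as the feature under test, then asserts that tampered and forged tokens are rejected.

```python
import hashlib
import hmac

SECRET = b"server-side-secret"  # illustrative only; never hardcode a real key

def sign(user_id: str) -> str:
    """Issue a token of the form '<user_id>:<hmac-sha256 hex>'."""
    tag = hmac.new(SECRET, user_id.encode(), hashlib.sha256).hexdigest()
    return user_id + ":" + tag

def verify(token: str) -> bool:
    """Accept a token only if its signature matches its user ID."""
    user_id, _, sig = token.partition(":")
    expected = hmac.new(SECRET, user_id.encode(), hashlib.sha256).hexdigest()
    return hmac.compare_digest(sig, expected)

# Use case: a token the server issued verifies successfully.
assert verify(sign("alice"))

# Abuse cases: tampering must be detected.
assert not verify("alice:" + "0" * 64)                          # forged signature
assert not verify(sign("alice").replace("alice", "admin", 1))   # swapped identity
```

Each threat vector in the threat model gets a corresponding abuse-case test, so regressions show up in the normal test run.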

3. Encrypt data stored on the device

Poorly implemented encryption is a major problem for many mobile apps. Just ask Starbucks. In 2014, a security expert found that the company's mobile app left users' data unencrypted on the device. Historically, mobile apps have struggled to protect data due to oversights, such as not implementing encryption on the connection to the server and not storing authentication credentials securely.

"You might not be able to guard that information while the application is running, but in terms of the way you store data and the way that you transmit the data off the application, it is vitally important that you think of data in those terms and encrypt it," Trustwave's Henderson says.

When considering what communications and data should be encrypted, developers should consider how to protect data if an attacker gains control of the application. Programmers have to view themselves through the eyes of the attacker, he says.
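One concrete application of that mindset is never storing credentials in recoverable form. The following is a minimal sketch using only Python's standard library; on real devices, prefer the platform's facilities (Android Keystore, iOS Keychain) and vetted crypto libraries.

```python
import hashlib
import hmac
import os

def hash_credential(password: str, salt=None):
    """Derive a slow, salted hash so a stolen data store doesn't reveal the secret."""
    if salt is None:
        salt = os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 200_000)
    return salt, digest

def check_credential(password: str, salt: bytes, digest: bytes) -> bool:
    """Recompute the derivation and compare in constant time."""
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 200_000)
    return hmac.compare_digest(candidate, digest)

salt, digest = hash_credential("correct horse battery staple")
assert check_credential("correct horse battery staple", salt, digest)
assert not check_credential("wrong guess", salt, digest)
```

An attacker who extracts the app's storage gets a salt and a slow hash, not the credential itself.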

4. Identify and actively manage third-party libraries

Developers should use a system to regularly check for updates in the third-party code they use in their product, so the code remains current with the latest versions. Failing to do so could leave a known security hole in their products that attackers can exploit.

Keeping up to date on a handful—if not dozens—of coding libraries and application frameworks is difficult. Once the effort of keeping up with third-party libraries is understood, more companies may want to pare down the workload by minimizing the amount of third-party code in their applications, says Theodora Titonis, vice president of mobile for application security provider Veracode.

"Having insight into what those third-party libraries are doing is critical and asking whether the functionality is really necessary" she says.

5. Minimize the attack surface area

That is an attitude that should apply to other facets of mobile development as well, says Adrian Mettler, a development engineer on the mobile team at FireEye. Rather than pulling in broad frameworks, developers should pare the mobile app's functionality down to just the capabilities it needs, shrinking the opportunities for attack. This approach is known as minimizing the application's attack surface.

A mobile app, for example, doesn't need to trust many certificates. In many cases, a company can hardcode trusted certificates into the software. Known as certificate pinning, this technique could, in the case of the JBOH vulnerability, eliminate the threat of an attack.

"The way that the developer can ensure that they are protected against that [the WebView issue] is to make sure that they only load trusted sites that they control," Mettler says. "If you are careful to properly validate SSL connections and only load those pages...then there is no way for the attacker to get malicious code into the application through WebView."

6. Obfuscate the code

Finally, developers can adopt a number of techniques to harden their application against attackers' efforts to reverse engineer the code. Obfuscation, which turns the code into indecipherable gibberish, raises the bar slightly for attackers. As Mettler points out, why make it easy for the bad guys?

"If you make it difficult enough to reverse engineer your application, it may make it less likely that there is a trojanized version floating around somewhere," he says.

In the end, creating secure mobile applications boils down to education and resolve. Developers need to take the time to learn about secure application development and the common vulnerabilities and security weaknesses that creep into applications. Only by incorporating security into their development process, whether through secure design, review of third-party libraries, or the simple step of obfuscating the resulting code, can programmers create applications that resist attackers rather than empower them.
