
Security liability is coming for software: Is your engineering team ready?

Rob Lemos, Writer and analyst

Software engineers have largely failed at security. Even with the move toward more agile development and DevOps, the number of vulnerabilities continues to climb: more than 10,000 issues are expected to be reported to the Common Vulnerabilities and Exposures (CVE) project this year.

Things have been this way for decades, but the status quo might soon be rocked as software takes an increasingly starring role in an expanding range of products whose failure could result in bodily harm and even death.

It may take a threat of that magnitude to push software engineers into taking greater security precautions. While agile and DevOps practitioners are belatedly taking on the problems of creating secure software, the original Agile Manifesto did not acknowledge vulnerabilities as a threat, focusing instead on "working software [as] the primary measure of progress."

Without a security mandate, the incentives to create secure software do not exist, said Joshua Corman, director of the Cyber Statecraft Initiative for the Atlantic Council and a founder of the Rugged Manifesto, a riff on the original Agile Manifesto that skews toward security.

The incentives we currently reward, Corman said, are time to market and low cost, "and people are doing exactly what they are being incentivized to do."

"There is no software liability and there is no standard of care or 'building code' for software, so as a result, there are security holes in your [products] that are allowing attackers to compromise you over and over."—Joshua Corman

Instead, almost every software program comes with a disclaimer to dodge liability for issues caused by the software. The end-user license agreement (EULA)—a dense legalistic disclaimer that only 8% of people read, according to data collected in 2011—essentially states that people do not have to use the software, but if they do, the developer is not responsible for any damages. EULAs have been the primary way that software makers have escaped liability for vulnerabilities for the past three decades. 

Beyond the EULA

Experts see that changing, however. With the increasing adoption of the Internet of Things (IoT), industrial control (IC) networks, and devices powered by machine intelligence, such as self-driving cars, developer mistakes will increasingly have physical consequences, and that spells problems for software companies that have depended on a legal shield, said Mike Ahmadi, global director of critical systems security for the Synopsys Software Integrity Group.

"[Software companies] have been able to hang on to the EULA, because 30 years ago when they first appeared, no one relied on software."—Mike Ahmadi 

That's no longer the case. Given the trends in security and liability, experts have three recommendations for developers in a post-EULA world.

1. Developing for IC or the IoT? Think security first.

Developers creating software to power the IoT and IC networks should build security into their development process and have a risk-mitigation plan in place. Because a vulnerability in either type of device or system could have a physical impact, software makers need to expend extra effort and resources securing their products.

John Soughan, a principal for Glen Avenue Consulting and a former business lead at Zurich Insurance, said he believes that, at the very least, companies should have a significant discussion at the board level about how focused they should be on finding flaws in any code the company produces. 

"When a stock price drops and the officers and directors are sued, they will have to show that they looked at this issue and made a decision based on the legal landscape and the economic realities," he said. "Unfortunately, no one even looks at this until somebody gets hurt, and you are looking at significant damages."

2. Build incentives into the development process

Most developers define success as working code that is delivered on time. Yet increasingly the definition of "working" needs to include the concept of secure code, and the development process needs to have incentives in place to guide developers toward producing secure code. 

"We don't have a technology gap per se for resilient, reliable, dependent digital infrastructure," the Atlantic Council's Corman said. "What we have is an incentive gap, and we are not going to see something different unless we incentivize something different."

Companies need to avoid known vulnerabilities and make their software easily but securely patchable. Incentives should help keep developers focused on security as they consider programming choices, Corman said.
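
To make "avoid known vulnerabilities" concrete, here is a minimal sketch of a build-time check that queries the public OSV.dev vulnerability database for each pinned dependency and fails the build on a match. The package list, version pins, and exit behavior are illustrative assumptions, not a practice prescribed by the experts quoted here.

```python
# Minimal sketch: fail a CI build when a pinned dependency has known
# vulnerabilities, using the public OSV.dev query API.
import json
import sys
import urllib.request

OSV_QUERY_URL = "https://api.osv.dev/v1/query"

# Hypothetical pinned dependencies; in practice, parse a lockfile.
DEPENDENCIES = [
    ("PyPI", "jinja2", "2.4.1"),
    ("PyPI", "requests", "2.31.0"),
]

def known_vulns(ecosystem: str, name: str, version: str) -> list:
    """Ask OSV.dev for vulnerabilities affecting one package version."""
    payload = json.dumps({
        "version": version,
        "package": {"name": name, "ecosystem": ecosystem},
    }).encode("utf-8")
    req = urllib.request.Request(
        OSV_QUERY_URL,
        data=payload,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp).get("vulns", [])

def main() -> None:
    failed = False
    for ecosystem, name, version in DEPENDENCIES:
        vulns = known_vulns(ecosystem, name, version)
        for v in vulns:
            print(f"{name} {version}: {v['id']} {v.get('summary', '')}")
        failed = failed or bool(vulns)
    # A nonzero exit code blocks the build, keeping components with
    # known CVEs out of the shipped product.
    sys.exit(1 if failed else 0)

if __name__ == "__main__":
    main()
```

Wiring a check like this into continuous integration is one low-cost way to close Corman's incentive gap by default: shipping a component with a known vulnerability becomes the path of most resistance rather than least.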

3. If you use open source code, it's your responsibility

Open source code poses a conundrum for liability. Placing liability on the volunteer programmers who create foundational code would stifle innovation. Yet, arguably, someone must be accountable for that code.

Moving into the future, however, legal precedent will likely result in companies absorbing the risk of open source code, said Synopsys' Ahmadi.

"It is ultimately, from the manufacturer's perspective, the responsibility of the person getting financial gain from the software," he said. "It won't matter if you are using third-party components in your software, because, in the end, it's your software."

The days of software companies shifting responsibility for vulnerabilities will likely end within the next decade, if not much sooner, he said. For developers, that means learning about security, designing security into their software, and continuously testing it to detect security issues.
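
As one concrete form of that continuous testing, the sketch below uses property-based testing (here, the hypothesis library) to hammer a parser with arbitrary bytes and assert it never crashes. The parse_header function is a hypothetical stand-in for any code in your product that handles untrusted input.

```python
# Minimal sketch of a property-based security test: assert that an
# input parser survives arbitrary, malformed input without raising.
from hypothesis import given, strategies as st

def parse_header(raw: bytes) -> dict:
    """Toy 'key: value' line parser; stands in for real input handling."""
    fields = {}
    for line in raw.split(b"\n"):
        if b":" in line:
            key, _, value = line.partition(b":")
            fields[key.strip().decode("utf-8", "replace")] = value.strip()
    return fields

@given(st.binary(max_size=4096))
def test_parser_survives_arbitrary_input(raw):
    # The security property: no input, however malformed, should cause
    # an unhandled exception (a common path to denial of service).
    fields = parse_header(raw)
    assert isinstance(fields, dict)
```

Run under pytest, a test like this executes hundreds of generated cases on every build, which is what "continuously testing" looks like in practice rather than as a slogan.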

Software security liability is on the radar

Ahmadi said he sees some notable signs of the change: Two security researchers remotely compromised a Jeep Cherokee in 2015, and more recently unpatched security flaws allowed the WannaCry ransomware to lock away data worldwide. The specter of a hackable medical implant has worried technology-literate doctors and patients, so much so that former U.S. Vice President Dick Cheney had his heart implant's wireless capabilities disabled to prevent hackers from using it against him.

"We have been predicting what will go wrong for a long time now. And it seems that in just the last year, just about everything that we predicted would go wrong, has gone wrong—save attributable death. There will be software liability, with absolute certainty, at some point in the future."—Mike Ahmadi 

While attempts to create a liability framework through legislation have largely failed, the possibility of a human casualty caused by a software flaw makes it increasingly likely that product liability will be settled in a court case, said Glen Avenue's Soughan, who designs strategies for addressing cyber risk.

"As soon as an incident results in significant loss of life or significant property damage, people tend to pay attention."—John Soughan

He added, "Manufacturing, industrial control systems, energy systems—all of those things that, when they fail, have a greater likelihood of bodily injury and/or property damage—will drive up the probability of software liability."
