Is it possible for non-web companies to reap benefits from continuous delivery, or is this just a pipe dream? This article discusses how one non-web-based company has scaled continuous delivery to meet its needs.

Is continuous delivery only for web companies?

Christine Parizo, Principal, Christine Parizo Communications

Who stands to gain most from adopting a continuous delivery capability? Web companies such as Google and Facebook are well-known for their use of continuous delivery to provide users with the latest features of their products as fast as possible. But the need to push out features with minimal manual intervention is becoming increasingly important to non-web companies as well, with continuous delivery gaining ground in environments ranging from data centers to manufacturing. Companies are pushing developers to speed up the traditionally slow update process, adding fixes and improvements faster than ever before to gain a competitive edge. Experts agree that continuous delivery can work for non-web companies, but they warn against doing too much, too soon.

Continuous delivery is being hailed as the next evolution of the agile software development movement, according to Jeffrey Palermo, CEO of software development firm Clear Measure. The Agile Manifesto changed the way software projects were planned by breaking them into smaller chunks. Just a few years later, continuous integration furthered the case for dividing development into small batches. Palermo expects continuous delivery to be as well documented and codified in 10 years as continuous integration is now.

It's not just web companies that benefit

While the benefits of continuous delivery for web companies are well-known, any company can benefit, as long as they recognize that continuous delivery has its limits, according to Palermo. "The modern Internet companies have demonstrated that it can be taken to the extreme," he says, noting that developers will write code and have the changes in production within a few hours. Some Internet companies suggest that 30 upgrades a day is the norm.

"Continuous delivery does not imply that it must be implemented as an extreme," Palermo says. What it means is that, when a feature or upgrade is ready, it's deployed immediately, rather than being tied to an arbitrary schedule such as a quarterly upgrade. The model is even taking hold at Microsoft: Windows 10 is the company's last planned major release, and from then on Microsoft will use continuous delivery to provide upgrades, he notes.

Companies ranging from financial services to healthcare have also started using continuous delivery. The idea is that they can plan, develop, and test a single feature, says Palermo. Then, when it's ready, the new code is pushed into production. "If your software runs on Linux or Windows servers and is perpetually connected to the Internet, there's a low boundary to frequent deployment," he says. "The premise is to deploy when it makes sense."

One thing to note: continuous delivery doesn't make sense for software running on devices that aren't persistently connected to the Internet, like gas pumps or cars. But if the software runs on commodity servers that are connected frequently enough throughout the day to absorb changes, it's still feasible, Palermo says. An iPhone isn't guaranteed to be connected to the Internet, for example, but it's online frequently enough that applications can be automatically updated without user intervention.

The biggest thing to watch out for when moving to continuous delivery, according to Palermo, is the mindset that comes with the switch. "For some companies, there will be organizational changes to make," he says. But for the most part, continuous delivery will only change the pipeline between continuous integration and the production environment going forward.

Data centers gain ease of updates, better visibility

One type of organization that definitely benefits from continuous delivery is the data center. For CenturyLink, moving to a continuous delivery model helped them make changes to deployment scripts and machine configurations as seamlessly as possible, with as little manual intervention as necessary, according to Matt Wrock, senior software engineer at Chef and former principal software engineer at CenturyLink Cloud. The company was also looking for a better way to troubleshoot. "If something goes wrong, I don't have to ask my colleague, 'Did you do this command or click this thing?' " he says. Instead, all that information is in the script and can be debugged.

CenturyLink hasn't changed its mechanism of deployment with continuous delivery, but what is being deployed is different. "One thing to be aware of is that we're not doing a deployment on every code change," Wrock says. "The pipeline doesn't flow to production continuously." This is because a daily or more frequent deployment would just create unnecessary overhead when there isn't anything to deploy, he adds.

The pipeline begins by pushing a code change into source control using GitHub. CenturyLink's continuous integration server, which uses the continuous integration tool TeamCity, picks up the change. Because CenturyLink uses Chef-based infrastructure code, the Foodcritic tool conducts the lower-level unit tests on the Chef cookbook. If it passes, the cookbook is submitted into the QA Chef server, in the test environment. An entire dependency tree is drawn on that cookbook, and CenturyLink runs kitchen tests that will spin up a virtual machine or container that runs through the entire server configuration governed by the cookbook change. Once that's complete, the cookbook versions are promoted into the real QA environment, where people are working off the infrastructure these cookbooks are creating, Wrock says.
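The stages Wrock describes can be sketched as a simple gated pipeline: each step must pass before the cookbook is promoted toward QA. The sketch below is illustrative only, not CenturyLink's actual tooling; the stage names and functions are hypothetical stand-ins for the real TeamCity, Foodcritic, and Test Kitchen steps.

```python
# Illustrative sketch of a gated cookbook-promotion pipeline.
# Stage names are hypothetical; in the pipeline described above they
# would correspond to Foodcritic (lint), Test Kitchen (converge and
# verify in a VM or container), and a Chef server upload.

def run_pipeline(stages):
    """Run stages in order; stop at the first failure."""
    completed = []
    for name, check in stages:
        if not check():
            return completed, f"failed at: {name}"
        completed.append(name)
    return completed, "promoted to QA"

# Each stage is a callable returning True on success. Here they are
# stubbed out; in practice each would shell out to the real tool.
stages = [
    ("lint cookbook (foodcritic)", lambda: True),
    ("converge test VM (kitchen test)", lambda: True),
    ("upload to QA Chef server", lambda: True),
]

completed, status = run_pipeline(stages)
print(status)  # promoted to QA
```

The point of the gate is the early exit: a lint or kitchen failure stops promotion, so a broken cookbook never reaches the QA Chef server.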

This is where problems are spotted before the new code is deployed in production, according to Wrock. Before a production deployment, several QA data centers are put in the same state as the data center where the new code is sitting, which will show the team what will happen if the new code goes into production. Usually the code will run smoothly, but nuances in data centers can sometimes cause snags, he says. But once the QA deployment is finished, the code is pushed to production.

"In our case, QA and production are totally separate," Wrock says. "No QA server can talk to a production server." Some individuals have keys on a machine that grant access to both of those domains, and they will be the ones to run the command to push new cookbooks from QA into production, but not all at once. A canary data center (think of the canary in the coal mine) is the first to receive the new code so that, if something goes wrong in the canary, the external customers don't suffer, Wrock says. "The idea is that all of our data centers and infrastructure code are in the same state, so if something goes wrong in this one, something will go wrong in another."
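The canary scheme Wrock describes boils down to: deploy to one data center first, verify its health, and only then roll out to the rest. A minimal Python sketch of that logic, with hypothetical names; in practice the promotion is a command run by operators with keys to both domains, and the health signal comes from real monitoring.

```python
# Minimal sketch of a canary rollout across data centers.
# Function names and the health check are hypothetical placeholders.

def rollout(datacenters, deploy, healthy):
    """Deploy to the first (canary) data center; abort if it fails
    its health check, otherwise roll out to the remaining ones."""
    canary, rest = datacenters[0], datacenters[1:]
    deploy(canary)
    if not healthy(canary):
        return f"aborted: canary {canary} unhealthy"
    for dc in rest:
        deploy(dc)
    return f"rolled out to {len(datacenters)} data centers"

deployed = []
result = rollout(
    ["dc-canary", "dc-east", "dc-west"],
    deploy=deployed.append,      # stand-in for pushing cookbooks
    healthy=lambda dc: True,     # stand-in for monitoring checks
)
print(result)  # rolled out to 3 data centers
```

Because all data centers share the same infrastructure code and state, a failure in the canary is a reliable predictor of failure everywhere, which is exactly why stopping after an unhealthy canary protects external customers.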

Continuously competitive

Ultimately, whether continuous delivery makes sense depends on the business and its goals. But for non-web companies, it's worth taking a page from Google's playbook and considering continuous delivery to speed upgrades and feature releases. That doesn't mean new code every hour; it means releasing new features or fixes at a rate that the business can handle. In turn, companies can gain a competitive edge by having new software in production, whether it's being pushed out to internal or external customers.
