
Doing DevOps with legacy IT: Driving change from the Ops side

Everyone is talking about DevOps these days, as well they should. The speed and flexibility achieved by streamlining development and operations into a combined capability is attractive to both IT and business leaders. The combination of automation, agile and lean processes, and the move to a microservices architecture speeds the deployment of the applications and services that enterprises need to be competitive.

But what about traditional businesses, where legacy applications represent most of the workloads and the bulk of the revenue? Those applications were designed with an emphasis on "industrial-grade" reliability, security, and scalability, which demanded a methodical, slow march toward deployment.

But when organizations need to respond quickly to new digital business needs, this traditional approach is too cumbersome. Developers and testers wait for operations to provision environments for them, and every stakeholder lacks visibility into the efficiency and quality of the work being done. The result: late, poor-quality releases that drive up costs while failing to meet business needs.

It’s far too expensive and risky to rewrite legacy applications along newer architectural lines. But when organizations need to adapt their legacy portfolio to accommodate digital business needs, IT operations teams can play a key role in driving greater agility and innovation velocity. 


I refer to this approach as Ops for Dev, or "OpsDev." It delivers core IT capabilities through an automated, self-service experience and orchestrates delivery processes while increasing visibility across the entire IT value chain. This enables IT to function much like a cloud service provider, generating speed and flexibility when needed while also providing industrial-grade controls for traditional applications. Here's how we at HPE have used it to drive increased competitiveness for our customers.

OpsDev lever one: Increased automation

Even without refactoring your applications, you can capture enormous benefits by automating as many IT processes as possible using modern monitoring, deployment and collaboration tools. This streamlines IT delivery, ensuring repeatability and speed at scale. 

Automating processes optimizes cost by reducing dependence on human expertise and labor. As the maturity and scope of automation increases, the emphasis shifts to orchestration: collating automation into broader capabilities, such as runbooks, or embedding security code scanning into the path to production without slowing the engineering pipeline.

Orchestrating capabilities increases the agility of the organization by minimizing human intervention and automating the interaction dependencies between development, security and operations processes. 
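To make the idea concrete, here is a minimal sketch of runbook-style orchestration in Python. The step names, the context dictionary, and the embedded security gate are illustrative assumptions for this article, not a real product API:

```python
from typing import Callable, Dict, List, Tuple

def provision_environment(ctx: Dict) -> Dict:
    # Automated stand-in for what used to be manual environment provisioning.
    ctx["environment"] = "test-env-01"  # hypothetical environment name
    return ctx

def deploy_build(ctx: Dict) -> Dict:
    # Deploy the latest build into the provisioned environment.
    ctx["deployed"] = True
    return ctx

def security_scan(ctx: Dict) -> Dict:
    # A security gate embedded directly in the path to production:
    # the runbook fails fast instead of waiting on a manual review.
    if not ctx.get("deployed"):
        raise RuntimeError("nothing deployed to scan")
    ctx["scan_passed"] = True
    return ctx

def run_runbook(steps: List[Callable[[Dict], Dict]]) -> Tuple[bool, Dict]:
    """Execute steps in order with no manual hand-offs; stop at first failure."""
    ctx: Dict = {}
    for step in steps:
        try:
            ctx = step(ctx)
        except RuntimeError as err:
            return False, {"failed_step": step.__name__, "error": str(err)}
    return True, ctx

ok, result = run_runbook([provision_environment, deploy_build, security_scan])
```

The point of the sketch is the shape, not the steps: once each task is automated, the orchestrator chains them so that development, security, and operations interact through the pipeline rather than through tickets.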

Most organizations are adept at automating their processes, using commercial IT management solutions and frameworks such as ITIL to guide the approach. Orchestration, however, becomes problematic at enterprise scale because of the breadth and complexity of the multi-sourced technology landscape. Until now, there wasn't a framework to guide orchestration.

So HPE joined forces with several enterprise customers to create IT4IT, a new standard for IT management. The combination of IT4IT and ITIL provides a good procedural and functional context to help organizations advance toward OpsDev.

Bottom line: Lever one emphasizes using automation to reduce cost and introduces orchestration to increase IT agility and efficiency at scale.

OpsDev lever two: Reduced latency

Lever one provides the foundation for the second lever, which drives down latency through self-service consumption of services and orchestration of delivery. This accelerates innovation because developers no longer need to assemble the tools and components they need, or to wait for IT to manually provision environments for them.

They can instead access everything from development and test environments to an engineering workbench complete with approved APIs, security services, and integrated toolchains. The combination of self-service and orchestration enables continuous integration, testing, and release, making it easier for developers to adopt new methodologies and architectures such as microservices.

Reducing latency also means injecting lean thinking and agile methods into processes: eliminating wasteful steps wherever possible, abstracting complexity, and ensuring that each step generates the output (data) required by the next link in the value chain. The goal is to streamline through self-service and orchestration, moving toward continuous everything (improvement, delivery, testing, and assessment) to reduce build, integration, and implementation times.

Bottom line:  Lever two builds on lever one to speed IT and business innovation through self-service and orchestration.

OpsDev lever three: Visibility

As the old saying goes, you can't manage what you can't measure. That's why the third crucial ingredient in OpsDev is instrumentation of your planning, engineering, and test workspaces, as well as production environments, making them visible to both development and operations so they can be monitored and managed.

Increased visibility allows you to measure the quality and speed of every process in the IT value chain, from inception and development through testing and on to release and deployment. Continuous data flows build constant feedback loops that improve visibility into everything from planning to the efficiency of your development lifecycle. This helps you find problems before you release code and lays the groundwork for machine learning, increasing quality and customer satisfaction while reducing the cost of troubleshooting and maintenance.
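As one concrete example of what this instrumentation can feed, here is a sketch that computes simple flow metrics (lead time and failure rate) from pipeline event records. The record fields and sample data are illustrative; in practice these would come from your CI/CD and release tooling:

```python
from datetime import datetime

# Illustrative release records; real data would be ingested from pipeline logs.
releases = [
    {"committed": "2016-03-01T09:00", "deployed": "2016-03-03T17:00", "failed": False},
    {"committed": "2016-03-04T10:00", "deployed": "2016-03-05T10:00", "failed": True},
]

def lead_time_hours(release: dict) -> float:
    """Hours from code commit to deployment for one release."""
    fmt = "%Y-%m-%dT%H:%M"
    delta = (datetime.strptime(release["deployed"], fmt)
             - datetime.strptime(release["committed"], fmt))
    return delta.total_seconds() / 3600

avg_lead_time = sum(lead_time_hours(r) for r in releases) / len(releases)
failure_rate = sum(r["failed"] for r in releases) / len(releases)
```

Metrics like these, computed continuously rather than compiled by hand, are what turn raw instrumentation data into the feedback loops described above.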

At scale, this instrumentation generates information that becomes a "big data" challenge for IT. Thus, lever three implements the necessary automation, integration, ingestion, and warehousing to capture data and turn it into meaningful information.

Once the instrumentation matures, insights and metrics can be offered as services that developers, operators, or business managers subscribe to as needed through the self-service experience.

Bottom line: Visibility creates higher levels of cooperation and trust among business, development, and operations teams than ever before. This provides a foundation for accelerated decision making and empowers teams to embrace both DevOps and OpsDev approaches.

OpsDev: Extending DevOps benefits to legacy apps

In greenfield companies, DevOps is a commonplace approach. To ensure profitability and speed time to market, application developers wear multiple hats, and a culture of trust and collaboration exists across teams. Here, the bulk of IT capabilities comes from small IT departments and third-party services.

But for established large enterprises, embracing DevOps is a complex undertaking. Developers interact with, or can be part of, an IT organization with thousands of employees spread across hundreds of geographically dispersed teams. The culture in these organizations is often rooted in command and control and risk avoidance, which run counter to DevOps. This requires organizations to rethink what DevOps actually means, and the transformational journey to get there.

OpsDev is not a pie-in-the-sky idea. It is a straightforward codification of what IT needs to focus on to transform and participate with the business in the journey to digital. The result is an IT organization that can operate in multiple modes: industrial grade, fast, and flexible.

The key is to apply these three levers in a consistent and efficient way in the context of both traditional and modern applications.
