

How service virtualization cuts developer wait times

Ericka Chickowski, Freelance writer
Service virtualization reduces wait times by simulating and modeling dependencies for developers, QA, and ops teams. Here's when and where you should use it.

Enterprises are developing an insatiable appetite for continuous delivery, even as the complexity and interdependence of enterprise software continues to increase. That's the paradox DevOps teams face. In this environment, work in progress (WIP) bottlenecks that leave one set of developers, testers, or operations staff waiting for other teams to complete their work are the kiss of death. That's where service virtualization can help.

With DevOps, teams endeavor to collaborate better than ever. They're performing more work in parallel and streamlining manual tasks through automation. But even as they work together, many teams spend too much time waiting for others to give them access to a dependent component or system so they can get their development or test work done. The reasons vary: the service might be incomplete, constrained by a third party, or too expensive or sensitive to open up beyond a limited time frame. Regardless, this hurry-up-and-wait idle time undermines any DevOps or agile goal of increasing the speed of innovation.

This is a primary reason why organizations are adopting service virtualization, a tool that lets you simulate and model dependencies for developers and quality assurance (QA) teams. Your teams can use service virtualization to represent realistic behavior for a given service when it's not available. They can also make changes to the state of the model in order to create different behaviors based on an infinite number of variables.
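To make the concept concrete, here is a minimal, hand-rolled sketch of a virtual service in Python: an HTTP stub that stands in for a real dependency and exposes a control endpoint so tests can change its simulated state at runtime. Commercial service virtualization tools layer recording, data modeling, and broad protocol support on top of this basic pattern; the endpoints and payloads below are invented purely for illustration.

```python
# A bare-bones "virtual service": an HTTP stub simulating a dependency.
# Real service virtualization products add recording, richer data models,
# and many protocols; the endpoints and payloads here are hypothetical.
import json
import time
from http.server import BaseHTTPRequestHandler, HTTPServer

# Mutable model state: tests flip this to produce different behaviors.
STATE = {"behavior": "ok", "latency": 0.0}

CANNED = {
    "ok":    (200, {"score": 742, "status": "APPROVED"}),
    "error": (503, {"error": "upstream unavailable"}),
}

class VirtualService(BaseHTTPRequestHandler):
    def do_GET(self):
        # Simulated business endpoint standing in for the real system.
        if self.path.startswith("/credit-score"):
            time.sleep(STATE["latency"])  # simulate network/processing delay
            status, body = CANNED[STATE["behavior"]]
            self._send(status, json.dumps(body).encode())
        else:
            self._send(404, b"{}")

    def do_POST(self):
        # Control endpoint: change the model's state at runtime, e.g.
        # POST /__state  {"behavior": "error", "latency": 6.0}
        if self.path == "/__state":
            length = int(self.headers.get("Content-Length", 0))
            STATE.update(json.loads(self.rfile.read(length)))
            self._send(204, b"")
        else:
            self._send(404, b"{}")

    def _send(self, status, body):
        self.send_response(status)
        self.send_header("Content-Type", "application/json")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

if __name__ == "__main__":
    HTTPServer(("localhost", 8080), VirtualService).serve_forever()
```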

"Service virtualization use is the difference between having collaboration throughout the software lifecycle versus just talking about it," says Theresa Lanowitz, founder of voke, inc. "It's a technology that removes barriers across teams, the software supply chain, and the organization."

Although service virtualization has existed for a decade, its use is still not pervasive within the enterprise. A survey by voke earlier this year found that 57 percent of organizations were using service virtualization. Among those, just 19 percent were using it enterprise-wide, and only 16 percent extended that usage to third-party, offshore teams or their software supply chain.

Conforming to the way engineers work

According to voke, 81 percent of organizations experience development delays associated with waiting for a dependency to develop software or to reproduce or fix a defect. Eighty-four percent of those same respondents said they experience QA delays waiting for a dependency to begin testing, start a new test cycle, test a required platform, or verify a defect.

"If you asked a typical development or test team, 'What do you spend most of your time doing?' they'll tell you it's not writing code, it's not testing code, it's waiting," says service virtualization pioneer John Michelsen, author of Service Virtualization: Reality Is Overrated, and chief product officer at enterprise mobile security software vendor Zimperium. "It's waiting on environments to get built, it's waiting on other teams to get done with their stuff, it's waiting for our configuration teams to be made, it's waiting for data in downstream systems to be populated so they can execute certain functions," he says.

Todd DeCapua, innovation coach for the HP software innovation team and senior manager of network virtualization and service virtualization at HP, says an HP survey of clients and other professionals shows that developers and testers spend, on average, 35 percent of their work lives waiting for environments to be available. In one recent interaction with an enterprise, DeCapua's team sat down with the CIO to do the math, tallying up wasted time based on actual salaries.

"We very quickly agreed with the CIO that this is about an $18 million conversation for him," he says.

It is, DeCapua says, a serious cross-functional problem ailing enterprise IT.

According to Michelsen, the engineers of the physical world figured out long ago that waiting to build one piece of some large-scale physical object until another piece is built is a sure way to keep projects in limbo forever.

"If we were Boeing and designing an airplane, we wouldn't wait to build a wing until we got the rest of the plane built in order to stick it on the plane and see if the plane would fly. That's absurd. You design the wing independent of the plane in a wind tunnel," he explains. "What those other engineering disciplines figured out is this notion of modeling and simulation. I don't need the real thing to connect the wing to. I need a wind tunnel that I can control to make sure that component works."

Service virtualization in the wild

Because service virtualization has had time to mature, it has built up a considerable roster of proven case studies. These show that organizations that put service virtualization simulations in place see, on average, a 40 percent increase in development speed, according to voke.

Before his time at HP, DeCapua led an early service virtualization implementation at financial services firm ING DIRECT, while his team was developing a loan-origination system in 2007. At the time, his team was trying to accomplish a first for the industry, including real-time scoring and real-time offers directly within the application.

"When we released that product at first, it failed 100 percent of the time," he says. The system made over 150 services and infrastructure calls, and those dependencies caused a ton of headaches. One, for example, involved a single call to a service provider that handled the back-end credit scoring.

"You'd send that company a few key bits of data and what they'd send back to you was scoring across all three credit bureaus and an aggregated credit score," DeCapua says. In that first iteration, a developer had built in a five-second time-out, guessing that would be enough time for the call to come back from the provider.

It wasn't. Service virtualization helped his team test better for such dependency problems while still maintaining enough speed to stay ahead of the competition.
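To illustrate (using the hypothetical stub sketched earlier, and invented names and values throughout): a virtual service lets a test push the provider's simulated latency past the five-second budget on demand, something no real credit bureau would do for you.

```python
# Reproducing the timeout failure on demand against the hypothetical
# stub from earlier, assumed to be running on localhost:8080. A real
# scoring provider can't be told to "respond slowly now"; a stub can.
import requests

STUB = "http://localhost:8080"

def fetch_credit_score(applicant):
    # Client code under test: the original developer guessed five
    # seconds would always be enough for the provider to respond.
    resp = requests.get(f"{STUB}/credit-score", params=applicant, timeout=5)
    resp.raise_for_status()
    return resp.json()

def test_slow_provider_is_handled():
    # Push the simulated latency past the five-second budget.
    requests.post(f"{STUB}/__state", json={"behavior": "ok", "latency": 6.0})
    try:
        fetch_credit_score({"applicant_id": "12345"})
        raise AssertionError("expected a timeout")
    except requests.exceptions.Timeout:
        pass  # the failure mode the first release hit in production
    finally:
        # Restore the stub's default behavior for the next test.
        requests.post(f"{STUB}/__state", json={"behavior": "ok", "latency": 0.0})
```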

At DevOps and agile consultancy Clutch, CEO Steve Anderson says he uses service virtualization to help enterprise customers reduce their QA footprint. "It's this idea of not having to have a release manager or a peer DevOps manager creating these micro-environment circumstances," he says. "You can instead create the service virtualization environments that are going to mimic the calls going back and forth. And you just treat them as another variation of systematic elements and microservices."

He considers service virtualization an underlying component of working effectively with a microservices architecture.

"If you're looking at a distributed architecture when it comes to a microservices framework, you've got to figure out how to zip out just this one area or module of your application without having to test it against all this other stuff in a production environment," he explains. "Service virtualization is a huge enabler for microservices."

In researching his book, Michelsen came across many examples of service virtualization in action. For example, Bank of America used it to avoid scaling massive mainframes just to test front-end web clients. "They'd have these $10 million mainframes doing nothing but running test scenarios," he says. "They used virtual services to run dozens of tests at a time, because with virtual services you can run 100 of them on a single server."

FedEx used service virtualization to keep up with customer demands for speed of innovation and reliability of applications. "With the introduction of service virtualization technologies, we're now able to stand up hundreds of interfaces and virtual back ends without requiring the complex interdependencies of the peripheral systems," Russ Wheaton, VP of enterprise application architecture at FedEx, says in Michelsen's book. "In one example from my team, we simulate 25 back-end services representing about 2,000 different servers in a space we traditionally struggled with. Interestingly enough, we didn't even test those services. We just needed them to test the higher-end dependent service. Taking away the need to work with real systems has greatly simplified our process."

When to use it

So how does an organization get started with service virtualization, and how do you decide where it makes sense and where it doesn't? Systems that are highly modularized, high-traffic, high-load, and high-scale are ones to target, especially if they have connection points or involve legacy systems, Anderson says, because those are the systems that tend to have the most constraints.

"But if you're coming at it from a perspective where you have a fairly small number of modules inside an architecture, where you have no dependency on legacy systems, I would also take a hard look at it, because in some ways you could actually spin up a copy of your production environment more cost-effectively than it would take you to use service virtualization," he says.

Chris Ciborowski, founder and partner at DevOps systems integrator Nebulaworks, has been working with several organizations to use containerization in lieu of service virtualization. For example, one of his customers is building a microservices-based application and wants to consume services across the development team in a consistent manner.

"One customer is using Docker's service platform, and they have broken down each one of the key functionalities of the application into individual, discrete components that a small team is working on," he says. Before, they would have virtualized services in order to represent those functions. "Today their developers can use a composed file and pull down and stitch together just the pieces of the application platform that are required in order for them to do their development, and then for the unit test to be done as well."

However, Michelsen warns that containerization is no panacea. "They're still live systems [and] they still behave like live systems. You still have to manipulate them, so you really wouldn't say 'Oh, wait, we've containerized this out, therefore we don't need to build virtual services,' " he says. "Even the containerized service is being built by someone else. It's got data dependencies; it has lower performance than production, or different performance."

When thinking of service virtualization, he adds, teams should consider things that are either in scope or out of scope for the piece for which they are responsible. "What's out of scope is everything that I talk to that my team doesn't build," he says. "Whatever those are written in, whether they're containerized or use virtual machines, they're constraints," and they represent most of the time that developers spend waiting. "They still wait, even if it's containerized, because the dev team still isn't done, or that dev team hasn't put proper data scenarios in place."

Service virtualization and containerization aren't an either/or choice; few tools in the continuous delivery pipeline are. You need to think about what's in scope and use every trick available. "Containerization is great, the automated build is great, continuous integration is great," Michelsen says. "You do all of that for what's in scope and then you use virtual services for everything that's out of scope." In this way, you can ramp environments up and down immediately.
