Network virtualization and delivering on the promise of DevOps
Virtualization is hugely important for DevOps because it lets dev and IT teams use identical compute environments. But the network environments have always been different, and that limitation greatly complicates the handoff between development and production.
What if we virtualize network hardware, such as routers and switches, creating a replica of an existing network? Can we streamline application delivery, and get dev and IT on the same page?
What is network virtualization?
By "network," I don't mean the physical wire. While virtually all networks today are driven by some form of Ethernet, it's useful to think of the network stack using the Open Systems Interconnection (OSI) model. OSI delineates network function and operation in a series of layers, from layers one through seven, as defined below.
- Layer 1: Physical Layer. Transmits raw, unstructured bits over a physical medium, such as a cable.
- Layer 2: Data Link Layer. Transmits data frames between nodes on the same physical link.
- Layer 3: Network Layer. Manages addressing, routing, and traffic on a network.
- Layer 4: Transport Layer. Transmits data segments between network ports.
- Layer 5: Session Layer. Manages communication sessions between two points on the network.
- Layer 6: Presentation Layer. Translates data between a networking service and an application.
- Layer 7: Application Layer. Implements high-level services such as directory services and resource sharing.
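The seven layers above can be sketched as a simple lookup table. This is purely illustrative; the example technologies listed for each layer are rough placements, and many real protocols span more than one layer:

```python
# Illustrative mapping of OSI layers to example technologies.
# Protocol placements are approximate; many protocols span layers.
OSI_LAYERS = {
    1: ("Physical", ["Ethernet cabling", "fiber", "radio"]),
    2: ("Data Link", ["Ethernet framing", "Wi-Fi (802.11)"]),
    3: ("Network", ["IP", "ICMP"]),
    4: ("Transport", ["TCP", "UDP"]),
    5: ("Session", ["NetBIOS", "RPC"]),
    6: ("Presentation", ["TLS", "character encoding"]),
    7: ("Application", ["HTTP", "DNS", "SMTP"]),
}

def layer_name(n: int) -> str:
    """Return the OSI layer name for layer number n (1-7)."""
    return OSI_LAYERS[n][0]

if __name__ == "__main__":
    for n in sorted(OSI_LAYERS):
        name, examples = OSI_LAYERS[n]
        print(f"Layer {n}: {name} (e.g., {', '.join(examples)})")
```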
What does this layered organization of services do for a network? First, it reduces the physical connection to a mere conduit that can carry multiple virtual networks; the wire simply forwards packets from one location to another.
Second, because addressing, application data, and other packet state are carried in the packets themselves, layers two through seven operate as software above the physical network. That means they can be virtualized, and they have been. Network virtualization works in layers two through seven: everything but the wire.
What good is virtualizing the software layers of the network? Well, first, there are traditional IT advantages, such as the automation of network policy, features, and services. Each application, on each virtual machine (VM), has its own network layers two through seven. Network features such as open ports, firewall policy, and packet forwarding are automatically applied with the application, so that setting up network parameters becomes automated.
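To make the idea of network settings traveling with the application concrete, here is a minimal sketch. The policy schema and the `apply_policy` function are hypothetical, invented for illustration, not the API of any particular product:

```python
from dataclasses import dataclass, field

@dataclass
class NetworkPolicy:
    """Hypothetical per-application network policy that travels with the VM."""
    open_ports: list[int] = field(default_factory=list)
    firewall_allow: list[str] = field(default_factory=list)  # CIDR blocks
    forward_to: dict[int, str] = field(default_factory=dict)  # port -> backend

def apply_policy(policy: NetworkPolicy) -> list[str]:
    """Render the policy into an ordered list of provisioning steps.

    A real network virtualization layer would program virtual switches
    and firewalls; here we just emit human-readable steps to show how
    provisioning can be driven automatically from data.
    """
    steps = [f"open port {p}" for p in policy.open_ports]
    steps += [f"allow traffic from {cidr}" for cidr in policy.firewall_allow]
    steps += [f"forward port {p} -> {dst}" for p, dst in policy.forward_to.items()]
    return steps

web_app = NetworkPolicy(
    open_ports=[443],
    firewall_allow=["10.0.0.0/8"],
    forward_to={443: "app-tier:8443"},
)
for step in apply_policy(web_app):
    print(step)
```

Because the policy is ordinary data attached to the application, the same definition can be applied in development, testing, and production without manual re-entry.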
When we talk about DevOps and other ways to accelerate deployment, things start to get really interesting.
Thinking about testing
If we want to accelerate application development, network virtualization can provide a seamless interface to testing. There's no need for separate testing builds, for example. With most development teams coding and building using virtualized servers, those server VMs can make excellent testing environments.
But not if the network isn't configured as required for actual production use. To faithfully reproduce the user configuration, testers have to get information about the production network and manually configure the application each time.
Worse, often that level of network information simply isn't readily available. Testers may guess, ignore network policies altogether, or simply leave open whatever ports they need. While most do at least some security testing, security settings are most often left to IT.
Network virtualization offers the opportunity to mimic real-world network hardware and system software. In effect, it emulates connections between applications, services, dependencies, and end users for the purpose of testing under as realistic conditions as possible. It provides a standard network configuration throughout the application development lifecycle—developers and testers use the same network policies, definitions, and even equipment as production, making it possible to test under real-world conditions.
Network virtualization also has an impact on performance. Without knowing the behavior of the network, it's difficult to accurately assess how fast a typical multi-tiered application will run.
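As a toy illustration of what emulated network behavior buys a test suite (this is a simple simulation, not a network virtualization product), a harness can inject latency and packet loss around a service call. The latency and loss numbers here are made up; in practice they would come from measurements of the production network:

```python
import random
import time

def flaky_network(func, latency_s=0.05, loss_rate=0.1, rng=None):
    """Wrap a callable so each call suffers configurable latency and loss.

    latency_s and loss_rate stand in for measured production behavior;
    the values used below are arbitrary, for illustration only.
    """
    rng = rng or random.Random(42)  # fixed seed for repeatable tests
    def wrapped(*args, **kwargs):
        time.sleep(latency_s)           # simulated round-trip delay
        if rng.random() < loss_rate:    # simulated dropped request
            raise TimeoutError("simulated packet loss")
        return func(*args, **kwargs)
    return wrapped

fetch = flaky_network(lambda: "200 OK", latency_s=0.001, loss_rate=0.5)
results = []
for _ in range(10):
    try:
        results.append(fetch())
    except TimeoutError:
        results.append("timeout")
print(results.count("timeout"), "of 10 calls timed out")
```

An application that passes its tests only against an instant, lossless stub may still fail the first time a dependency answers slowly; wrapping calls this way surfaces those failures before production does.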
Back to DevOps
Today, to deploy an application, developers typically package it up into an executable or compressed executable and send it to an IT repository for extraction and installation. That may or may not be a seamless process.
The stack isn't as easy to package up. Depending on the stack components, it could be highly complex to bundle as part of the executable. Alternatively, the stack components could be listed as system requirements, or agreed upon as a business policy. However, it's not uncommon for development to use the latest third-party components, if only for the bug fixes they provide.
IT takes the application and stack, builds one or more VMs, and installs the components on one or more servers (usually multiple servers, separating at least presentation, business logic, and data). Once the VMs are built and running, their networks have to be manually provisioned and separately tested.
Alternatively, it's possible to deliver one or more VMs to production, complete with application, stack, and OS. But the VM likely runs on a different type of server in the two environments, and on different networks.
What's missing in this transfer is the network information: specifically, the policies, restrictions, and exceptions that have to be applied for the application to work properly without becoming a security risk to the organization. This matters because development teams usually don't use the same settings as production networks.
Suffice it to say that deployment of modern applications, even many packaged commercial applications, can become complex and error-prone. It may take days or even weeks for a released application to make it into production, and it's likely still being tweaked for weeks beyond that.
An ounce of virtualization
Now, let's say that both servers and network are virtualized. They might be in the cloud, but the infrastructure can also be in an internal data center. One or more VMs are designated the development environment, where the dev team builds and tests the application.
Because the network has been virtualized as a part of development, it's a simple process to move fully tested and ready VMs to production. All it requires is copying the VMs to a shared storage area for IT, or simply transferring the rights to the existing VMs directly.
In return, the IT team can provide the development and testing team with relevant network policy information, enabling it to develop against the same network, albeit virtual, that the application will eventually be deployed on.
When development and testing are complete, the VM, with virtualized network settings, is simply transferred to IT in order to be plugged into the real network. That could mean putting the application on a server on the production network, where it should work without any further provisioning.
Nor should it mean resetting network parameters in the VM or retesting the application in the production setting. Done correctly, the transition from development to production happens with the flick of a switch, so to speak.
Knowing your options: From shift-left to production
Network virtualization software is available commercially and as open source. Some products virtualize entire networks, some just the stack, and others just hardware devices. If you're looking for something to accelerate and "shift-left" testing, many of these solutions will fit the bill. Full network virtualization will also ease the transition from dev to production.
Network virtualization is part of a larger category known as service virtualization, which provides a way to capture and use essential characteristics of application components that are still in flux or haven't been developed or procured yet.
In the past, teams had to stub out or mock up components or services in order to test applications under development. Those approaches serve to isolate the application under test from dependent components and services but don't model the realities of slow performance, bad data, or disconnected networks.
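The difference can be sketched in a few lines: a plain stub always answers instantly and correctly, while a service-virtualization-style test double replays the failure modes the stub hides. The function names, SKUs, and scenario labels below are hypothetical, chosen only to illustrate the contrast:

```python
import time

def stubbed_inventory(sku):
    """Classic stub: an instant, always-correct answer."""
    return {"sku": sku, "in_stock": 12}

def virtualized_inventory(sku, scenario="slow"):
    """Service-virtualization-style double: replays realistic behaviors."""
    if scenario == "slow":
        time.sleep(0.002)  # stands in for observed multi-second latency
        return {"sku": sku, "in_stock": 12}
    if scenario == "bad_data":
        return {"sku": sku, "in_stock": None}  # malformed upstream record
    if scenario == "disconnected":
        raise ConnectionError("simulated network partition")
    raise ValueError(f"unknown scenario: {scenario}")

# Code tested only against the stub never sees these realities;
# code tested against the double must handle all three.
print(stubbed_inventory("A-100"))
print(virtualized_inventory("A-100", "bad_data"))
```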
Does network virtualization fulfill the promise of DevOps?
Emerging DevOps practices have brought application development and IT closer together in terms of working to deliver, monitor, and fix applications in production. DevOps done in the spirit of teamwork means that nothing is held back between developers and IT personnel.
A seamless handoff from dev to production allows DevOps to blend separate functions into a shared operation. Ideally, the teams come to think and work as one; it's all about using computers and software to deliver business value.
Network virtualization takes us a bit closer to that ideal, but there's more work to be done. And it's more than just flipping a switch—it's a combination of technical, organizational, and cultural changes that put these two groups on the same team, using the same software for the same purpose. Getting there is going to require more than network virtualization, but it's a necessary part of the journey.