

From converged to composable: A guide to the new infrastructure jargon

Robert L. Scheier, Principal, Bob Scheier Associates
 

If software is “eating the world,” as web pioneer Marc Andreessen famously said, its latest snack is IT infrastructure.

Almost every IT vendor is promising a world where virtualized compute, storage, and network hardware are defined as software, much as virtualized servers today consist of software running on physical servers. Configuring and managing such resources through scripts, rather than by hand, greatly reduces management costs and makes better use of expensive equipment. The consistency that comes with scripted configuration and management should, in turn, improve an organization's security and compliance by ensuring that every system conforms to a single, approved image.
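To make that concrete, here is a minimal sketch of what scripted provisioning can look like, written in Python against the AWS boto3 SDK purely for illustration; the image ID, instance type, and tags are hypothetical, and any other cloud or infrastructure API could play the same role.

```python
import boto3  # AWS SDK for Python, used here purely for illustration

ec2 = boto3.client("ec2", region_name="us-east-1")

# Launch every web server from the same approved image so each system
# conforms to a single, known-good configuration.
response = ec2.run_instances(
    ImageId="ami-0123456789abcdef0",   # hypothetical ID of the approved image
    InstanceType="t3.medium",
    MinCount=2,
    MaxCount=2,
    TagSpecifications=[{
        "ResourceType": "instance",
        "Tags": [{"Key": "role", "Value": "web"}],
    }],
)

for instance in response["Instances"]:
    print(instance["InstanceId"], instance["State"]["Name"])
```

Because the same script produces the same result every time it runs, drift between systems becomes something you can detect and correct rather than something you discover during an audit.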

Quickly spinning up and configuring infrastructure also makes it easier to implement DevOps (the melding of development and operations to speed applications to market) by accelerating the creation of all the compute, storage, and networking resources needed for development, test, and production systems.

“The old way of doing things was very much centered around specific pieces of hardware and huge capital expenditures in cabling, rack space, data centers, and cooling,” says Ari Weil, vice president of marketing and business development at application performance acceleration vendor Yottaa. Customers today “instead want flexible computing resources that allow them to scale up to meet peak computing needs without overspending on hardware and software.”

Most of these software-defined approaches rely heavily on application programming interfaces (APIs), and in some cases high-speed networks, to communicate among the underlying components. 

Vendors describe this vision with a slew of overlapping and even contradictory terms. While even the experts don’t agree on terminology, here’s a rough guide to some of the most popular buzzwords with tips on which of these approaches might work best for you.

Converged infrastructure

Converged infrastructure can be thought of as “traditional enterprise systems, jammed together, with a unified management layer atop it,” says John Abbott, distinguished analyst at 451 Research. “It involves only one SKU [stock keeping unit] to buy, and is easier to maintain over the lifecycle because all the elements are controlled and can be upgraded in sync without breaking the system.”

Also called “integrated infrastructure,” converged infrastructure is very attractive to those customers “who want to keep their long-term systems current and predictable” without needing to retest them over time, he says.  

Converged infrastructure helps only with “the physical cost of acquiring and configuring hardware,” says Bryan Che, general manager of cloud product strategy at open-source software vendor Red Hat. The customer must still purchase and set up the software that configures those resources. The customer trades away “some ease of use and flexibility in exchange for more ability to control the software component,” he says. As such, it is a better fit for larger companies with greater needs and the skilled staff to choose and configure software themselves.

“If you’re smaller and want easily deployable blocks and have a lot of money, converged infrastructure … can be pretty good” because it reduces lifecycle management costs and simplifies procurement, says Abbott. It also allows companies to easily add capacity as needed, says Kelly Murphy, co-founder and chief technology officer of Gridstore, a hyperconverged software storage vendor. However, he says, implementing converged infrastructure tends to carry a higher up-front capital cost than stand-alone compute, storage, and networking gear. This requires customers to determine whether the long-term savings will outweigh the higher initial cost.

Vendors providing such solutions include Dell, EMC, Hewlett Packard Enterprise, NetApp, VCE, and VSPEX.

Hyperconverged infrastructure

Hyperconverged infrastructure also combines compute, storage, and networking hardware in one physical unit, with a software control layer that pools those resources and manages the infrastructure through software. The term is often used to describe systems that eliminate the need for a separate storage area network. Unlike converged systems, this approach uses commodity hardware and implements more of the management functions in the control software.

 “If you want to add quick resources, a modular, hyperconverged infrastructure can do it,” says Abbott, by offering “appliances you buy as you need” without much effort required to scale up the systems. It is also often good for discrete application areas.

This is an attractive option “especially if you’re not a huge firm,” says Che. He warns, however, that development of such systems is still relatively immature.

There is a potential for vendor lock-in, but most vendor offerings can be linked to those from competitors, says Abbott. Another concern is that hyperconverged systems could create another silo of systems to manage, which can increase management overhead, he says.

Another downside is that customers must rebuild services such as snapshots, backup, recovery, and deduplication from scratch. Depending on the implementation, a hyperconverged deployment can require as many as three management interfaces, says Dave Kawula, founder of TriCon Elite Consulting, a virtualization consulting firm.

Customers who rely on a single vendor for hyperconverged resources gain greater simplicity. Those with an all-Microsoft environment, for example, can achieve “single pane of glass” management of their compute, storage, and network resources with Microsoft System Center Virtual Machine Manager, says Kawula.

Hyperconverged infrastructure has “been out for several years and is easy to implement,” says Abbott, but most of its use has been “for things like branch office or some sort of fairly discrete application area … [to] see if it really does what it says it does and works in a reasonable way.”

He is also concerned that as hyperconverged infrastructures are used for more challenging applications, “it becomes more complicated,” falling further from the ease of use that is a prime argument for such software-defined approaches.

Vendors offering such solutions include Gridstore, HPE, Nimboxx, Nutanix, Pivot3, Scale Computing, and SimpliVity.

Software-defined infrastructure

Rather than a set of definable products, software-defined infrastructure—along with infrastructure as code, which is discussed in the next section—is typically used to describe the benefits delivered, in various degrees, by software-defined approaches. IBM, for example, defines it as the transformation of “a static IT infrastructure into a dynamic resource, workload and data-aware environment,” with application workloads “serviced automatically by the most appropriate resource running locally, in the cloud, or in a hybrid cloud environment.”

HPE takes a slightly different approach, citing its use of a unified control layer that can be programmed with business rules and that allows IT administrators, applications, or business users to control the underlying IT resources.

Infrastructure as code

David Linthicum, senior vice president at Cloud Technology Partners, differentiates infrastructure as code (IaC) from other approaches in that the applications themselves, rather than an external software layer, have the intelligence to recognize and dynamically configure the infrastructure elements on which they run, including auto scaling and auto configuration. This is only being done in “pockets of innovation” within mainstream customers, he says, but is an essential part of the highly scalable, low-cost platforms provided by cloud service providers.  
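As a rough illustration of Linthicum's point, the hypothetical Python sketch below shows an application widening its own footprint when its measured latency climbs. It assumes an AWS environment and the boto3 SDK purely for illustration; the group name and threshold are made up.

```python
import boto3  # AWS SDK for Python, assumed here purely for illustration

autoscaling = boto3.client("autoscaling", region_name="us-east-1")

def scale_for_latency(avg_latency_ms: float) -> None:
    """Called from inside the application: when its own measured latency
    crosses a threshold, it asks for one more instance in its group."""
    group = autoscaling.describe_auto_scaling_groups(
        AutoScalingGroupNames=["web-tier"]            # hypothetical group name
    )["AutoScalingGroups"][0]

    desired = group["DesiredCapacity"]
    if avg_latency_ms > 250 and desired < group["MaxSize"]:
        autoscaling.set_desired_capacity(
            AutoScalingGroupName="web-tier",
            DesiredCapacity=desired + 1,
        )
```

The point is where the intelligence lives: the application itself, not an external orchestration layer, decides when and how the infrastructure beneath it should change.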

Che argues that IaC specifically refers to the ability of programmers and system administrators to automate specific resources or applications without the need for a full software-defined infrastructure, as “some folks still find value in automating the non-virtualized infrastructure.”

“In the long term, everyone is moving to a software-defined infrastructure and infrastructure as code,” says Che. “But the more immediate win and value...” comes from automating specific resources rather than creating a brand new infrastructure.  
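Automating a single resource in this spirit can be as modest as an idempotent script that brings one configuration file to a known state, whether or not the host is virtualized. The Python sketch below is purely illustrative; the file path and settings are hypothetical.

```python
from pathlib import Path

CONFIG = Path("/etc/exampled/exampled.conf")             # hypothetical path
DESIRED = "max_connections = 500\nlog_level = warn\n"    # hypothetical settings

def ensure_config() -> bool:
    """Bring the file to the desired state only if it differs, so the
    script is safe to run repeatedly (idempotent)."""
    current = CONFIG.read_text() if CONFIG.exists() else ""
    if current == DESIRED:
        return False        # already in the approved state; nothing to do
    CONFIG.parent.mkdir(parents=True, exist_ok=True)
    CONFIG.write_text(DESIRED)
    return True             # changed; a caller might restart the service here

if __name__ == "__main__":
    print("changed" if ensure_config() else "unchanged")
```

Scripts like this deliver the “more immediate win” Che describes because they can be adopted one resource at a time, without re-architecting anything.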

Composable infrastructure

Composable infrastructure is used more “to describe hardware architectures” than different software approaches, and is offered by vendors such as Cisco, Dell, and HPE with “modular system resources you compose in real time,” says 451 Research's Abbott.

It relies on an orchestration or composition layer outside of the application to abstract the physical infrastructure and configure it for the applications, says Linthicum. 

"With a composable infrastructure, the creation of resources such as load balancers or database servers can be automatically created when traffic, loads, or application latency hits certain levels,” says Yottaa's Weil. “You can take a series of disparate or integrated components and [assemble] them into a consistent architecture, whether physical, virtual, or hybrid."

One of the benefits of composable infrastructure over other software-defined approaches is that it allows the sharing of resources, such as system memory, among the virtual machines to meet varying application loads, says Murphy.  
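One way to picture such a composition layer is as a scheduler that carves logical servers out of pooled hardware and returns the capacity when they are torn down. The Python sketch below is purely illustrative, with made-up pool sizes and resource names, and stands in for the vendor-specific APIs that real composable systems expose.

```python
from dataclasses import dataclass

@dataclass
class Pool:
    """Shared hardware capacity that logical servers are composed from."""
    cpus: int
    memory_gb: int
    storage_tb: int

@dataclass
class LogicalServer:
    name: str
    cpus: int
    memory_gb: int
    storage_tb: int

def compose(pool: Pool, name: str, cpus: int, memory_gb: int, storage_tb: int) -> LogicalServer:
    """Carve a logical server out of the pool, failing if capacity is short."""
    if cpus > pool.cpus or memory_gb > pool.memory_gb or storage_tb > pool.storage_tb:
        raise RuntimeError(f"pool cannot satisfy request for {name}")
    pool.cpus -= cpus
    pool.memory_gb -= memory_gb
    pool.storage_tb -= storage_tb
    return LogicalServer(name, cpus, memory_gb, storage_tb)

def release(pool: Pool, server: LogicalServer) -> None:
    """Return a logical server's capacity to the pool when it is torn down."""
    pool.cpus += server.cpus
    pool.memory_gb += server.memory_gb
    pool.storage_tb += server.storage_tb

rack = Pool(cpus=256, memory_gb=4096, storage_tb=200)   # hypothetical rack of pooled gear
db = compose(rack, "db-primary", cpus=32, memory_gb=512, storage_tb=20)
web = compose(rack, "web-01", cpus=8, memory_gb=64, storage_tb=1)
release(rack, web)        # capacity flows back into the pool when a server is retired
print(rack)               # what the composition layer can still hand out
```

Because capacity flows back into the shared pool, the same memory or storage can serve different workloads at different times, which is the sharing benefit Murphy describes.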

Keep the end in mind

All of these new software-defined approaches might yet be relegated to niche status by public cloud services, which apply the same software-defined techniques at massive scale to deliver radically lower costs and greater scalability than any on-site data center, says Linthicum. In that case, customers might use internal software-defined approaches only for those applications that can't be cost-effectively moved to the cloud.

Whatever approach, or mix of approaches, you use, judge each vendor’s offering by how well it delivers the benefits you need most: reduced costs, faster deployment time, improved security and compliance, or a fast path to implementing DevOps.

“With virtualization as a stepping-off point, I would do a pilot project on something like software-defined storage … in a discrete application area and start learning from there,” recommends Abbott. “We’re still in the very early days, so you need to find something that fits your mode of operation, and not bite off the entire thing. We’re still some years away from the fully software-defined data center.” 

