Hyperconverged infrastructure: Some workloads work better than others

Linda Dailey Paulson, freelance writer

You need to know the performance characteristics of your workload to get the most out of any given computing architecture, but what workloads are the best fit when it comes to hyperconverged infrastructure?

For the purposes of this discussion, the workload is the amount of processing that the computer system must complete. This can include an application as well as the user sessions that may be interacting with the application.

Factors such as latency, storage replication, virtual machines, and input/output operations per second (IOPS) are all key to determining whether a given workload is ideal for such an infrastructure migration.

But what specific types of workloads are these? Opinions differ radically, but the trends are clear.

Hyperconverged workloads: Moving beyond virtual desktops

Some experts contend that workloads capable of scaling linearly are best suited for the move to hyperconverged infrastructure, while others assert that the technology has matured to the point that it’s easier to think about what not to run in these environments.

Workloads for which hyperconverged infrastructures are traditionally used include virtual desktop infrastructures (VDI)—the original and ideal use for the technology—and virtualization for remote offices, says Said Syed, group manager for hyperconverged product management at Hewlett Packard Enterprise.

Other common uses include virtual machine deployments for specific lines of business within an enterprise, such as for application testing and development, which enables configurations to be changed on the fly, and deployment of private cloud resources.

“Hyperconverged infrastructure was targeted as general purpose when it came out, but virtual desktops were the sweet spot,” says Jeff Kato, senior analyst and consultant with the Taneja Group. Users had been overprovisioning their enterprise hardware, creating performance bottlenecks. “Hyperconverged solved that. It was a nice use case for hyperconverged. Since, it’s gotten more sophisticated.”

Hyperconverged technology has advanced

Hyperconverged systems have changed radically in the last 12 months, making workload characteristics less and less of an issue, says Richard Fichera, vice president and principal analyst for infrastructure and operations at Forrester Research. Vendors have added more types of workloads, and VDI is now “a minority part of sales,” he says.

Users can run more general-purpose virtual machine workloads on hyperconverged systems. This includes enterprise and database applications and even some extremely specialized applications, including video surveillance systems and big data archives.

However, Syed says real-time applications with high IOPS and low-latency requirements—such as HANA databases and production big data workloads on Hadoop or Cloudera—“don’t make sense.” Some “yellow zone” workloads include OLTP databases, surveillance and security applications, and other systems that capture large objects or volumes of data in real time. Storage-centric applications, or settings where regulatory compliance is paramount, may also warrant a second thought.

Most large enterprises are succeeding in using hyperconverged infrastructure for development, testing and quality assurance, as well as for VDI, says Jerod Powell, founder and CEO of INFINIT Consulting, Inc. “It depends on the company. The workloads that are well suited are linear.”

This means pausing if the application consumes memory, CPU, or other resources more greedily than the rest—for example, when a disk runs out of I/O before CPU or memory is exhausted. “It amplifies the problem if the hyperconverged infrastructure is not running the proper workload,” says Powell.

Here, the problem is that exhausting a single resource forces you to add whole nodes, effectively doubling cost relative to the performance you gain.

However, Fichera says, the technology has matured. “These systems can run almost anything except latency-sensitive workloads needing a millisecond or better latency out of the storage,” he says.

Make your own assessment

Workload characterization requires looking carefully at the details of your current workload to determine whether it is suited for the transition to a hyperconverged infrastructure. Go beyond applications to examine acceptable SLAs, raw I/O, and virtual machine benchmarks.

Powell suggests examining how a given workload scales. If it does not scale linearly, then you may end up with unused resources on your expensive hyperconverged device. "You can end up spending more because of those unused resources,” he says.
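Powell's point about stranded resources can be made concrete with a rough calculation. The sketch below (with hypothetical node specs and workload figures, not tied to any vendor) checks which resource a workload saturates first on an identically sized node and how much of the other resources would sit idle at that point:

```python
# Illustrative sketch with hypothetical numbers: find the workload's
# bottleneck resource on a fixed hyperconverged node, and estimate how
# much of the other resources would be stranded when it saturates.

node = {"cpu_cores": 32, "ram_gb": 512, "iops": 100_000}   # assumed node spec
workload = {"cpu_cores": 8, "ram_gb": 64, "iops": 90_000}  # assumed per-instance demand

# Fraction of each node resource one workload instance consumes.
utilization = {k: workload[k] / node[k] for k in node}
bottleneck = max(utilization, key=utilization.get)

# When the bottleneck resource hits 100%, how much of each other
# resource is left unused ("stranded")?
stranded = {k: 1 - utilization[k] / utilization[bottleneck]
            for k in utilization if k != bottleneck}

print(f"Bottleneck: {bottleneck} at {utilization[bottleneck]:.0%} per instance")
for name, waste in stranded.items():
    print(f"  {name}: ~{waste:.0%} of capacity stranded at saturation")
```

With these figures, IOPS saturates first while roughly three quarters of the CPU sits idle, which is exactly the non-linear profile Powell warns about.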

Replication may be an issue as well. Keeping three replicas, for example, may not matter in a small environment, but if you're storing petabytes' worth of data three times over, it becomes a real cost.
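The replication overhead is simple arithmetic: raw capacity scales with the replica count. A back-of-envelope example, using an assumed 2 PB of data:

```python
# Back-of-envelope calculation (assumed figures): raw capacity needed
# when the hyperconverged storage layer keeps three full replicas.
usable_pb = 2.0   # data you actually need to store, in petabytes (assumed)
replicas = 3      # copies kept by the storage layer

raw_pb = usable_pb * replicas
overhead_pb = raw_pb - usable_pb
print(f"{usable_pb} PB of data needs {raw_pb} PB raw ({overhead_pb} PB of overhead)")
```

At small scale the multiplier is invisible; at petabyte scale it dominates the hardware bill.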

Syed says one important metric is mean time between failures (MTBF). A higher number, he says, means the hardware is more resilient. Users need to ask for performance benchmarks and metrics to better inform their decision, he says. “I see hyperconverged customers getting drowned by marketing fluff; then, as they apply it and as they scale, they begin to run into issues.” One such issue is component failure and lack of resilience, which can hit hard when hardware does fail.
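MTBF on its own does not translate directly into uptime; availability also depends on how quickly a failure is repaired (MTTR). A minimal sketch with hypothetical figures shows the standard relationship:

```python
# Availability from MTBF and MTTR, using the standard steady-state formula
# availability = MTBF / (MTBF + MTTR). All numbers here are hypothetical.
mtbf_hours = 100_000   # mean time between failures (assumed)
mttr_hours = 4         # mean time to repair (assumed)

availability = mtbf_hours / (mtbf_hours + mttr_hours)
downtime_min_per_year = (1 - availability) * 365 * 24 * 60
print(f"Availability: {availability:.5%}, "
      f"~{downtime_min_per_year:.0f} min of downtime per year")
```

Note that a large MTBF can still yield painful downtime if repairs are slow, which is part of why analysts disagree about how much the metric matters in practice.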

But Fichera disagrees. He contends that measuring MTBF doesn’t have much correlation in real life. "It’s a useless number. Most of the time when a system stops working, it’s a software problem.”

There are, he adds, “few absolutes” in terms of what workloads are best suited to hyperconverged environments. Most block-based workloads and anything able to run on a standard cluster should be well suited to such a move, but think twice about transactional workloads and those requiring less than a millisecond of latency.

Accentuate the positive

There are many reasons to adopt hyperconverged infrastructure, not the least of which is the significant reduction in operating expenditures that it can bring, says Fichera. You need fewer employees to attend to the same number of VMs, and you'll still retain the flexibility to scale readily. Storage administration is also less expensive.

This is particularly important in remote office locations, where qualified staff may be unavailable, says Powell.

How much savings is possible? Fichera says he has heard from enterprises that had experienced cost reductions ranging from 20% to 100%. The actual savings will vary depending on variables such as how well managed your previous environment was. 

If you’re considering a move to a hyperconverged infrastructure, do your homework, Fichera says. “The general level of systems is very good, so it would be hard to make a bad decision.” IT professionals should examine the nuances of the enterprise to determine which factors and variables are most important. This might include workload characteristics, storage components, and the operation itself. “Know thyself. Know thy workload. Know the details of the systems you are considering,” he says.

If, for example, you have a system in which the servers or database needs to be tuned regularly by IT experts, you may not want to make a move. 

"Hyperconverged is not about a specific application or workload use case, but about instilling the efficiency of operations in the data center," says Syed. "The primary reason enterprises are interested is that it significantly improves their IT operations by not needing to have multiple specialists to maintain their infrastructure.”

Getting started

Look at the driver behind your desire to adopt hyperconverged technology, Syed says. If it’s reducing capital expenses and increasing efficiency, hyperconverged infrastructure may be the answer. But if you need to run a geospatial Hadoop cluster, you may need to look at another technology for those types of analytics.

Begin small, taking into consideration the growth rate rather than the workload, at least initially, says Kato. “That’s the huge benefit of modular scalability. It grows with you.”

Start out with a specific appliance. “If you can afford to do it,” Kato suggests, “do a proof of concept.” With hyperconverged infrastructure, it's easy to have three nodes up and running in 15 minutes. Also, look for testimonials from users with an environment similar to yours.

If you’re already in a virtualized infrastructure, Kato says, check the ratio of storage to compute you are currently running. “If you’re well within that envelope, you’re good to go.”
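Kato's envelope check is easy to run yourself. The sketch below (hypothetical figures for both your environment and a candidate node; real specs vary by vendor) compares your current storage-to-compute ratio against what a node ships with:

```python
# Sketch with hypothetical figures: compare the storage-to-compute ratio
# of an existing virtualized environment with a candidate hyperconverged
# node's fixed ratio, to see whether the workload fits the node's envelope.
current = {"tb_storage": 400, "cpu_cores": 800}   # assumed current environment
node = {"tb_storage": 20, "cpu_cores": 32}        # assumed per-node spec

current_ratio = current["tb_storage"] / current["cpu_cores"]  # TB per core today
node_ratio = node["tb_storage"] / node["cpu_cores"]           # TB per core on the node

if current_ratio <= node_ratio:
    print("Within the node's storage envelope: compute will be the limit.")
else:
    print("Storage-heavy: you'd be buying extra nodes just for their disks.")
```

A storage-heavy result is the warning sign: you would be paying for CPU and memory you never use just to get more disks.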
