App for that? IT monitoring tools are in need of a major upgrade

Jerry Melnick, Chief Operating Officer, SIOS Technology

Too many IT monitoring tools are out of date. While applications are migrating to complex, dynamic virtual and cloud environments, many tools are stuck in a physical server mindset. This trend is driving the need for simpler, more intelligent tools to help understand and optimize the infrastructure for application service delivery.

Why are infrastructure monitoring tools so bad?

The reason behind this disparity is one of evolution and culture. Most management tools used for virtual IT infrastructures evolved from physical, server-based data centers built 10 or more years ago. They were built using the same UI frameworks and approaches that arose from earlier generations of client-server computing.

In the beginning, the purpose of these technologies was smaller in scope, and the environments they operated in were simpler. Their UI style and approach were never intended to manage the scale and complexity of the virtual infrastructures we operate in today, nor were they ever intended to adapt to these dynamic environments.

Today, IT administrators are using multiple tools that are complicated, highly manual, and limited in scope. Even many dashboards and other tools that use analytics are difficult to set up, access, and use.

Setup and configuration of each one can take days, and training can take weeks. They require continuous maintenance to respond to infrastructure changes. They can't match the load-and-go accessibility and automatic updates of mobile apps. Worse still, they're prone to generating "alert storms" and large volumes of poorly prioritized or disorganized data.

As a result, IT managers must consult and compare the output of multiple tools to assemble sufficient information to understand an issue. The analysis requires skill and experience, and the tools offer little in the way of automated intelligence. IT staff frequently find themselves inundated with overlapping, sometimes conflicting information, without an easy way to draw conclusions or specific guidance for resolving problems.

The core problem is that these tools were designed to monitor and manage only a narrow slice of the infrastructure. They evolved from computer science, with a focus on programmatic device monitoring and management techniques.

Simplicity throughout the user experience

In contrast, modern tools take a holistic approach using data science. They collect the vast amount of real-time machine data arising across the infrastructure from all components and apply sophisticated analytics to understand the inner workings of the infrastructure.

So, what is this new approach? First, it tackles the problem of complexity in virtual IT infrastructures at multiple levels by focusing on simplicity throughout the user experience, including where information is accessed, how it's consumed, and how the software itself is installed, configured, updated, and used.

Unleashing the power of machine learning

Advances in machine learning technology and a growing emphasis on information delivery, accessibility, and use are changing the way IT staff understand and make decisions about their virtual and cloud infrastructures. This technology makes it possible to design IT infrastructure analytics tools that deliver the same ease of use and mobility as consumer products. These tools transform massive volumes of data from "noise" into meaningful information and specific recommendations that improve performance, efficiency, and reliability in virtual infrastructures, and they deliver that information to IT staff when and where they need it.

The most powerful aspect of new, machine-learning-based IT infrastructure analytics tools is their ability to analyze and synthesize massive volumes of data, and to provide IT with the information they need in a single click or tap. In contrast to the shortcomings of first-generation analytics, advanced machine learning technology automatically derives all the infrastructure objects and their relationships (compute, network, and storage) to deeply understand the environment.
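As a rough illustration of what "deriving objects and their relationships" can mean in practice, here is a minimal sketch in Python. The object names and link types are invented for this example; a real tool would discover them automatically from live inventory and telemetry data rather than hard-coding them.

```python
# Hypothetical sketch: modeling discovered infrastructure objects and
# their relationships as a simple graph. Names are illustrative only.
from collections import defaultdict

class InfraGraph:
    def __init__(self):
        # Each object maps to the set of objects it is related to.
        self.edges = defaultdict(set)

    def link(self, a, b):
        """Record a bidirectional relationship between two objects."""
        self.edges[a].add(b)
        self.edges[b].add(a)

    def related(self, obj):
        """Return all objects directly related to obj, sorted by name."""
        return sorted(self.edges[obj])

g = InfraGraph()
g.link("vm-web-01", "host-esx-03")      # VM runs on this hypervisor host
g.link("vm-web-01", "datastore-ssd-1")  # VM's disks live on this datastore
g.link("host-esx-03", "switch-core-2")  # host uplinks to this switch

print(g.related("vm-web-01"))  # ['datastore-ssd-1', 'host-esx-03']
```

Once the relationships are known, an analytics engine can trace an anomaly on a VM back through its host, datastore, and network path, which is exactly the kind of cross-object reasoning threshold-based tools cannot do.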

By uniquely coupling this with advanced machine learning analytics, these tools learn an application's normal resource consumption and behavior patterns in relation to all of its related objects and resources across the infrastructure. Anomalies are called out to expose issues, uncover the root causes, and provide recommendations to resolve them.

Advanced technology uses topological behavior analysis—a discipline of machine learning—to correlate complex behavior patterns of interrelated objects in the infrastructure to anomalies that may indicate serious issues. This approach identifies subtle changes in the inner workings of the infrastructure that may be early warning signs of a problem. It also enables these tools to extract and deliver the information that IT staff need in a clear, usable format.

For example, a tool might provide a dashboard organized around key quality of service dimensions like performance, efficiency, reliability, and capacity, along with topological mapping of the infrastructure. IT staff simply click icons indicating a potential problem, and the tool provides a detailed diagnosis of the root cause of the problem, recommends specific solutions to it, and predicts the results if recommendations are implemented.

Keeping up with change

Unlike physical server environments, virtual and cloud infrastructures are almost constantly changing. Virtual machines are created, moved, eliminated, configured, and reconfigured.

Traditional analytics tools require IT to set thresholds for key parameters, such as CPU utilization; when a threshold is exceeded, the tool sends an alert. These thresholds don't account for complex interactions between objects in the infrastructure, and they have to be manually adjusted to accommodate every change.
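The static-threshold approach can be sketched in a few lines of Python. The 80 percent cutoff below is an invented example of the kind of value an administrator would pick by hand, and it illustrates the weakness: the threshold knows nothing about the workload, so a legitimately busy application trips it over and over.

```python
# Minimal sketch of traditional threshold alerting, assuming a fixed
# CPU-utilization cutoff chosen manually by an administrator.
CPU_THRESHOLD = 80.0  # percent; an illustrative hand-picked value

def check(samples, threshold=CPU_THRESHOLD):
    """Return the utilization samples that would each trigger an alert."""
    return [s for s in samples if s > threshold]

# A batch job that legitimately runs hot trips the alarm every night.
print(check([35.2, 41.0, 92.5, 88.1]))  # [92.5, 88.1]
```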

Machine-learning-based analytics tools add automated intelligence to the equation, not only to deliver the key insights IT needs to resolve issues but also to provide self-adjustment and self-configuration. This approach eliminates hours of tedious work and allows IT to proactively find subtle issues missed by other tools.
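Here is a minimal sketch of what "self-adjustment" can look like, assuming a simple rolling-baseline model: the tool learns each metric's normal range from recent history and flags only statistically unusual values, so nobody retunes thresholds by hand. The z-score cutoff of 3.0 is an illustrative choice, not any particular vendor's method.

```python
# Hedged sketch of a self-adjusting baseline: learn a per-metric normal
# range from recent history and flag only statistical outliers.
from statistics import mean, stdev

def is_anomaly(history, value, cutoff=3.0):
    """Flag value if it sits more than `cutoff` std devs from the mean."""
    if len(history) < 2:
        return False  # not enough history to learn a baseline yet
    mu, sigma = mean(history), stdev(history)
    if sigma == 0:
        return value != mu  # flat baseline: any deviation is unusual
    return abs(value - mu) / sigma > cutoff

# A "hot" but steady workload: high utilization is this metric's normal.
history = [62, 58, 65, 60, 63, 59, 61, 64]
print(is_anomaly(history, 63))  # False: within the learned normal band
print(is_anomaly(history, 95))  # True: far outside normal behavior
```

Note the contrast with a fixed threshold: the same reading of 63 percent that might look alarming against an arbitrary cutoff is recognized here as normal for this workload, and the baseline updates itself as history accumulates.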

Mobile- and touch-enabled for easy access

Many of the changes to infrastructure are driven by the huge growth in mobile. Ironically, IT professionals use tools that are generations apart from the ease of use and information accessibility that consumers demand in their mobile apps. We've all come to expect intuitive interaction with our mobile apps, and we're frustrated when it's lacking. In contrast, IT tools aren't held to anywhere near the same high standards for ease of use.

This gap can be closed by moving to IT analytics tools that are both mobile- and touch-enabled, allowing IT to access the information they need from their desktops or any mobile device. IT can check the status of their virtual environment, identify problems, and ensure optimized operation of their critical applications as easily as they check the weather on a mobile app. The best of these tools will scale from a phone to a tablet to an 80-inch touch screen.

Mobile apps also provide a good model for how tools should be updated. Traditional tools typically follow long development cycles built around major releases, delaying the delivery of new value and requiring significant work by IT to perform each upgrade. By adopting the app store model, the most advanced IT analytics software lets IT simply press an icon when a new release becomes available, enabling vendors to deliver valuable updates, with important new features, every four to six weeks.

A new hope

The key takeaway here is that you aren't stuck with bad tools. Vendors are leveraging advancements in machine learning and a new understanding of the importance of simplicity to deliver the next-generation approach to IT analytics that IT teams need to understand and optimize complex virtual infrastructures.
