
The era of composable monitoring is now: How enterprises can tap in

John Villasenor, VP, Moogsoft

The ability to custom-select and assemble a solution from a variety of modular components to meet a unique set of requirements has always been an attractive concept. After all, who doesn't prefer best-of-breed in whatever they do? Today, modular design and composable monitoring are enabling optimal choices in both our personal and business lives. Consider the way our mobile devices let us pick and choose from a huge catalog of applications, or the popularity of cloud services that let us selectively integrate several features into a cohesive whole.

It's little surprise that these concepts are also permeating the way that IT organizations choose to implement a monitoring strategy for service assurance. The latest disruption to the world of IT management is known as "composable monitoring." The trend emerged in the world of DevOps and tool-chain design; the term was later coined by 451 Research to describe "a modularized approach to IT monitoring, in which an overall monitoring architecture is constructed by integrating a set of small and disparate components."

Historically speaking, the desire to deploy a comprehensive monitoring system based on best-of-breed components is far from new. But what had previously held IT folks back from executing this was the pressure from leading vendors to keep systems closed and to hinder availability of consistent, open application programming interfaces (APIs). As a result, legacy monitoring vendors with closed, monolithic solutions were the only viable choices, and they often succeeded in gaining widespread adoption by locking in their customers.

Fast forward to 2015: the ubiquity of web APIs, modular platform designs, and open source software has changed the game entirely. It's now possible to implement an enterprise-class, comprehensive monitoring system using the best-of-breed tools of your choice. This disruptive innovation has led us to the new era of composable monitoring, and it's now changing the way that large enterprises and service providers deliver optimal service performance and customer experience.

The case for moving from a monolithic to a composable monitoring approach is twofold. First, no single vendor can solve all your monitoring needs. You want the best tools for the job, and it's simply not possible for one solution to extend across every monitoring domain. Many vendors will show you a roadmap to expand their solutions, but execution always takes time, and the delivered functionality is often full of gaps.

Second, monolithic solutions are inherently resistant to change. Yet change in enterprise IT is happening at an unprecedented pace that is only likely to escalate. No one knows exactly what their monitoring needs will be in two or three years. But with well-defined APIs and mature integrations, new monitoring tools can be added to swap out individual pieces with ease.
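The swap-out idea can be made concrete with a small sketch. In the adapter pattern below (all names and field layouts are hypothetical, not any vendor's actual API), each monitoring tool is wrapped so it emits a common alert shape; replacing a tool then means writing one new adapter, not rewiring the whole system:

```python
from dataclasses import dataclass
from typing import Protocol

@dataclass
class Alert:
    """Common alert shape every adapter normalizes to."""
    source: str
    host: str
    severity: int  # 0 = info ... 3 = critical
    message: str

class MonitorAdapter(Protocol):
    """Any monitoring tool plugs in by implementing fetch_alerts()."""
    def fetch_alerts(self) -> list[Alert]: ...

class NagiosStyleAdapter:
    """Illustrative adapter: maps Nagios-style check states to the common scale."""
    STATES = {"OK": 0, "WARNING": 2, "CRITICAL": 3}

    def __init__(self, raw_events: list[dict]):
        self.raw_events = raw_events

    def fetch_alerts(self) -> list[Alert]:
        return [
            Alert("nagios", e["host_name"], self.STATES[e["state"]], e["output"])
            for e in self.raw_events
        ]

def collect(adapters: list[MonitorAdapter]) -> list[Alert]:
    """Downstream code sees only Alert objects, never tool specifics."""
    return [alert for adapter in adapters for alert in adapter.fetch_alerts()]
```

The design choice is the point: because the rest of the pipeline depends on the `Alert` shape rather than on any tool's native format, swapping one instrumentation tool for another touches exactly one adapter.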

Why are most enterprise monitoring systems still monolithic?

Legacy monitoring vendors have enjoyed selling monolithic solutions because it increases lock-in and switching costs for their customers. These vendors have sold largely based on scare tactics, proselytizing that the only way to get everything to work was to deal with one vendor that could pull everything together (despite the fact that the components of the monolithic system often were cobbled from separate acquisitions). Ignore best-of-breed, these vendors advised—one-stop-shop is always better.

Over the years of building and maintaining bespoke configurations and static programmed rules, these legacy vendors achieved lock-in with the majority of enterprises due to their closed and proprietary solutions. But this is slowly beginning to change. While it seems risky to replace an old tool that's tied to processes that haven't changed for more than 10 years, can your IT operations afford not to modernize? A whole new generation of open and well-supported monitoring tools has appeared to serve the hybrid, dynamic IT environments of 2015, making proprietary, monolithic systems a thing of the past. Furthermore, it's no longer clear that a single vendor, as opposed to several, is actually easier to manage.

Despite successful implementation at leading enterprises, composable monitoring is still viewed with skepticism by some organizations. This skepticism may be due to general reluctance to move away from a long-standing vendor relationship, as well as reluctance from the support staff who have spent decades keeping the wheels on the monolithic system. Legacy vendors often preach that the IT complexities introduced by data center virtualization, distributed workloads, and custom homegrown tools lead to poor integration of multiple monitoring tools and data inconsistency. Some have even gone as far as to refer to composable monitoring as "franken-monitoring," sowing fear, uncertainty, and doubt.

So how is it that many leading IT organizations are able to reap the benefits of a modern, composable monitoring system, while avoiding the disastrous consequences of franken-monitoring that legacy vendors warn of?

The dual-layer architecture

Leading enterprises have found that a dual-layer approach is the secret to successfully implementing a composable monitoring environment. This approach consists of an instrumentation layer and a management layer.

The bottom layer in the architecture is the instrumentation layer for data collection and analysis. This is where the enterprise assembles a variety of domain-specific tools to gather all relevant data for analysis and remediation. It can consist of any number of tools, including tools for log management, application monitoring, server monitoring, cloud monitoring, network monitoring, and so on. While each of these tools serves a unique purpose and gives the organization a deep view into a piece of the IT environment, there's a massive volume of monitored data to look at among these numerous inputs. The typical solution is to grow ops and DevOps teams to keep up with the scaling volume, velocity, and variety of monitored data. However, a much more reasonable solution is to introduce a management layer on top.
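A minimal sketch shows what "many domain tools, one data stream" looks like in practice (the tool formats and field names here are invented for illustration): each source emits events in its own shape, and a thin normalization step merges them into a single time-ordered stream for the layer above to consume.

```python
def normalize_log_event(e: dict) -> dict:
    # Hypothetical log-management format: {"ts": epoch_seconds, "line": ...}
    return {"time": e["ts"], "domain": "logs", "detail": e["line"]}

def normalize_apm_event(e: dict) -> dict:
    # Hypothetical application-monitoring format:
    # {"timestamp": epoch_seconds, "txn": ..., "latency_ms": ...}
    return {"time": e["timestamp"], "domain": "apm",
            "detail": f'{e["txn"]} latency {e["latency_ms"]}ms'}

def merged_stream(log_events: list[dict], apm_events: list[dict]) -> list[dict]:
    """One time-ordered event stream across domains -- the raw input
    a management layer would consume."""
    events = [normalize_log_event(e) for e in log_events]
    events += [normalize_apm_event(e) for e in apm_events]
    return sorted(events, key=lambda e: e["time"])
```

Even this toy version makes the scaling problem visible: every additional domain tool multiplies the events in the merged stream, which is exactly why a management layer, rather than more staff, is the sustainable answer.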

This top layer is also known as the manager of managers (MoM) layer. While crucial, this layer is sometimes missing in tool architectures or is impaired by an outdated MoM, resulting in franken-monitoring. A MoM layer serves as the critical element that sits above all the composable elements, gluing everything together into a holistic solution. This enables situational awareness across the entire IT infrastructure, allowing teams to work across silos of people, processes, and tools.

The MoM isn't necessarily new. There are legacy MoMs, many built in the 1990s, designed to ingest the monitored data their probes gather using rules and models to try to reduce noise, filtering down to what is deemed most important. However, legacy MoMs are seriously limited by their fragile, rule-based approach, dependence on a 100-percent configuration model (no longer possible), and focus on monitoring infrastructure hardware. This approach is simply not change-tolerant enough to support modern enterprise IT, which can now face hundreds or even thousands of dynamic infrastructure changes daily.
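The fragility of that rule-based approach is easy to see in miniature. In the sketch below (the rules themselves are made up), a legacy-style filter only handles events its hand-maintained rules anticipated; an alert from a newly deployed component simply falls through the cracks until someone writes a new rule:

```python
# Static, hand-maintained rules of a legacy-style MoM: message -> action.
RULES = {
    "link down": "keep",
    "disk full": "keep",
    "heartbeat ok": "drop",
}

def legacy_filter(messages: list[str]) -> list[str]:
    """Keep only alerts matching a known rule; unknown alerts are dropped,
    which is exactly how novel failures go unnoticed."""
    kept = []
    for msg in messages:
        action = RULES.get(msg.lower(), "drop")  # no rule -> silently discarded
        if action == "keep":
            kept.append(msg)
    return kept
```

In an environment with hundreds of daily infrastructure changes, the rule set can never keep pace with the event vocabulary, which is the core limitation the next-generation approach addresses.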

This brings us to the next-generation MoM solution, the enabler of composable monitoring for the enterprise and the new era of IT. A next-generation MoM takes advantage of the latest innovations in machine learning, socialized collaboration, and open interfaces, making it agile enough for the dynamic, fast-paced nature of modern IT environments. A next-generation MoM sits above all your monitoring tools, ingesting all the monitored events and alerts, regardless of deployment or configuration. Despite the 10x increase in event data from just five years ago, the next-generation MoM scales, reducing event noise and finding event correlations in real time, giving you immediate situational awareness across the entire IT environment. Additional automation features like recycled knowledge access, ChatOps, and team-based data presentation all allow ops and DevOps teams to collaborate better to resolve incidents more effectively.
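A toy version of the correlation step illustrates the idea (real products use far richer machine-learning features than a simple time gap): alerts arriving close together are grouped into one "situation," so operators work a handful of situations instead of a flood of raw events.

```python
def correlate(alerts: list[tuple], window_seconds: int = 60) -> list[list]:
    """Cluster (timestamp, message) alerts: a new situation starts whenever
    the gap since the previous alert exceeds the window.

    A sketch only -- production correlation also weighs topology, text
    similarity, and learned patterns, not just arrival time."""
    situations = []
    current = []
    last_ts = None
    for ts, msg in sorted(alerts):
        if last_ts is not None and ts - last_ts > window_seconds:
            situations.append(current)
            current = []
        current.append((ts, msg))
        last_ts = ts
    if current:
        situations.append(current)
    return situations
```

Even this crude grouping shows the payoff: the number of things an operator must triage tracks the number of incidents, not the number of raw events.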

In essence, a next-generation MoM allows enterprises to overcome the complexities that come along with composable monitoring systems. These complexities revolve around the overwhelming data volume produced by multiple monitoring tools, the communication barrier between teams using different tools, and the lack of a single, holistic view of your entire IT environment. Through a focus on machine learning contextualization, team collaboration, information centralization, and a single interface for tools and activities, the next-generation MoM is transforming enterprise IT operations to scale and make use of composable monitoring systems.

Benefit from composability now

By introducing an overarching layer for monitoring orchestration—the MoM layer—enterprises can now take advantage of the composable monitoring approach that has taken the DevOps movement by storm. IT teams can now gain a holistic view with situational awareness of their entire IT environment. They can also access all the tools and features they need to make composable monitoring systems function the way that they were envisioned.

Composability is redefining how enterprise IT monitoring is done, and early adopters are realizing the benefits of this improved service assurance.

What are you waiting for? The time to go composable is now.