
Put IT Ops analytics to work: 3 applications your team can start with

Mike Perrow Technology Evangelist, Vertica

Many businesses are putting artificial intelligence (AI) to use in practical ways, which means they're setting aside the hype and wild expectations of past decades. According to a recent Forrester survey, IT operations analytics (ITOA) is the No. 1 application of AI technology in business.

ITOA can make day-to-day IT Ops work easier. But taking advantage of ITOA for day-to-day operations doesn't mean taking classes in data science and machine learning, or writing fancy algorithms to fine-tune analytics capabilities.

A new report from TechBeacon, "The State of Analytics in IT Operations," suggests that, instead, IT Ops specialists should become familiar with the kinds of analytics being used in their industry, then start learning what capabilities are embedded in their tools. IT Ops teams that are part of an extended business IT team might eventually seek advice from analytics specialists in the organization—security, big data, and business intelligence teams, for example.

Here is an overview of the use cases for ITOA from the report, plus expert advice on the practices to consider adopting as you explore the capabilities embedded in today's tool sets.

3 common use cases for ITOA

Unlike the analytics used in online advertising (ad tech), digital gaming (as seen on Twitch.tv and elsewhere), or other forms of high-speed, big data analysis, IT operations analytics is focused primarily on what IT Ops teams have been trained to do for years. In practice, that means three things:

  1. Controlling the cost of IT operations
  2. Performing root-cause analysis to determine the causes of problems
  3. Helping service desk teams manage the flow of information to customers regarding outages, bugs, patch management, etc.

Consider how these capabilities might improve your own IT Ops environment.

Cost control via analytics

If you work in IT operations, you know that most CIOs are under pressure to justify what they spend against the promised business value. The most common questions they ask are:

  • How do we lower the cost-to-performance ratio? 
  • What is the total cost of ownership (TCO) for our technologies?
  • How do we rightsize our resources?
  • How can we get the most from our outsourcing and contracting efforts?
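Questions such as rightsizing and cost-to-performance lend themselves to simple, automatable checks. Here's a minimal sketch, with made-up instance data and a hypothetical utilization threshold, of how a tool might flag rightsizing candidates:

```python
# Hypothetical rightsizing check: flag instances whose average CPU
# utilization stays well below capacity, suggesting that a smaller,
# cheaper size would improve the cost-to-performance ratio.

def rightsizing_candidates(instances, cpu_threshold=0.25):
    """Return names of instances averaging below cpu_threshold utilization."""
    return [
        inst["name"]
        for inst in instances
        if sum(inst["cpu_samples"]) / len(inst["cpu_samples"]) < cpu_threshold
    ]

fleet = [
    {"name": "web-01", "cpu_samples": [0.70, 0.65, 0.80]},   # busy: keep as-is
    {"name": "batch-02", "cpu_samples": [0.05, 0.10, 0.08]}, # idle: downsize
]
print(rightsizing_candidates(fleet))  # ['batch-02']
```

Real tools fold in memory, I/O, and pricing data, but the underlying comparison is this simple.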

Many IT Ops tools already help with performance, as well as monitoring and predicting spending, said Michele Goetz, principal analyst with Forrester Research. This is what she called "the block-and-tackle job of running and maintaining the platform, keeping the lights on, being agile to support business needs."

In the case of automated data warehouses, Goetz described how vendors have been steadily building and improving the patterns that come preconfigured in IT management tools: "Based on years of understanding how data centers run in the cloud, what those workloads are, all that understanding is built into the tools."

While these are more general, non-targeted modes of analytics, they eliminate the need for users to figure out their particular environment, she said.

"Vendors of IT Ops technology continue to learn how different types of workloads and administrative tasks inform how you're managing and optimizing those environments."
—Michele Goetz

Anomaly detection and root-cause analysis

Since the dawn of the industrial revolution, machine environments (factories, shop floors, even warehouses) have relied on worker know-how, along with meters, transducers, and other forms of telemetry to keep production at an optimal, normal state. It's no different in an IT Ops environment.

Today, training software tools to learn the parameters of normal operating conditions is a standard capability for operations teams. The goal is to define boundary conditions so the tools can quickly alert you when a critical function goes out of spec. And when that happens, you want those same tools to help you pinpoint which "thing" failed.
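A common way to express those boundary conditions is a statistical band around observed normal behavior. This sketch, with illustrative metric names and a conventional three-sigma threshold (not taken from any specific tool), shows the idea:

```python
# Minimal sketch of "learning normal": compute the mean and standard
# deviation of a metric's history, then alert when a new reading
# falls outside the expected band.
import statistics

def out_of_spec(history, value, sigmas=3.0):
    """Alert when value lies outside mean +/- sigmas * stdev of history."""
    mean = statistics.mean(history)
    stdev = statistics.stdev(history)
    return abs(value - mean) > sigmas * stdev

latency_ms = [101, 99, 102, 100, 98, 101, 100]
print(out_of_spec(latency_ms, 100))  # False: within the normal band
print(out_of_spec(latency_ms, 250))  # True: out of spec, raise an alert
```

Production tools use more robust baselines (seasonality, rolling windows), but the alerting logic reduces to this comparison.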

Even with sophisticated tools in place, the complexity of modern IT environments can pose thorny challenges when anomalies are detected, said Jeff Jamieson, CEO of Whitlock Infrastructure Solutions.

"What drives our customers crazy are events that they can't even imagine—based, for example, on a piece of infrastructure that no one has a clue was there."
—Jeff Jamieson

"The beauty of analytics-driven anomaly detection is that you don't have to know everything that might go wrong," he said. "While there are millions of log files that have captured what's going on in your environment, analytics can point you to three, four, or six areas that seem to be most relevant." 

So digging into a problem based on what tools can tell you—by pointing to specific lines of a log file, providing histograms that isolate specific areas of operations, even allowing time-based playback of recorded events—is how the life of an IT Ops team gets easier.
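One simple way analytics can narrow millions of log lines down to a few relevant areas is to score sources by how many rare messages they contain. This is a toy illustration with invented log data, not how any particular product works:

```python
# Hedged sketch: rank log sources so that files containing unusual
# messages surface first, instead of an operator reading everything.
from collections import Counter

def rank_log_sources(logs):
    """logs: {source: [message, ...]}. Rarer messages score higher."""
    freq = Counter(msg for msgs in logs.values() for msg in msgs)
    scores = {
        src: sum(1.0 / freq[msg] for msg in msgs)
        for src, msgs in logs.items()
    }
    return sorted(scores, key=scores.get, reverse=True)

logs = {
    "app.log": ["GET /health 200"] * 50,            # routine noise
    "db.log": ["connection pool exhausted",          # rare, interesting
               "GET /health 200"],
}
print(rank_log_sources(logs))  # ['db.log', 'app.log']
```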

Easing your service management crises

The decades-long service economy has found perhaps its strongest expression in IT. The "S" at the end of SaaS (software as a service), PaaS (platform as a service), and a host of other service-based procurement models means one thing: if you're provisioning software this way, you've got highly dependent customers.

And, invariably, they'll have issues. Managing your help desk with the best IT service management (ITSM) tools you can afford is how to keep your customers well supported and coming back for more of what you have to offer.

Analytics built into those tools is key to efficiency and repeat business.

"Good analytics leverages information across a number of different sources to help a worker opening up a ticket," Jamieson said. "The analytics engine can tell you, 'Wait ... we just had 15 other people log this same problem.' We're seeing our customers adopting machine learning as a way to drive down the time and cost of tickets."
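The "15 other people logged this same problem" check Jamieson describes amounts to clustering incoming tickets and flagging large groups as a shared incident. A minimal sketch, using simple text normalization in place of the machine learning a real ITSM engine would apply:

```python
# Illustrative duplicate-ticket check: group tickets by a normalized
# summary and surface any cluster above a threshold as a probable
# shared incident, so agents link tickets instead of triaging each one.
from collections import Counter

def shared_incidents(tickets, min_reports=3):
    """Return normalized summaries reported at least min_reports times."""
    counts = Counter(t.lower().strip() for t in tickets)
    return [summary for summary, n in counts.items() if n >= min_reports]

tickets = [
    "VPN login fails",
    "vpn login fails ",
    "VPN LOGIN FAILS",
    "Printer offline",
]
print(shared_incidents(tickets))  # ['vpn login fails']
```

Real engines match on fuzzier signals (error codes, affected services, text similarity), but the payoff is the same: one incident record instead of fifteen tickets.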

As ITSM teams monitor the business services they provide to customers, analytics can show the anomalies within a single, targeted service. This is especially valuable in systems built from multiple chunks of open source code, whose release may not have followed a rigorous battery of tests.

If you're new to ITOA, stay curious

ChatOps, machine-learning basics, networking analytics: All of these are areas you'll want to explore as you get deeper into ITOA. Perhaps you're working in a midsize or large organization, and you're able to consult with other teams—a security, business intelligence, or big data team, for example—that can answer specific questions you have as you delve into analytics for IT operations.

That's great. But even if you work for a much smaller organization, or you just want to get your feet wet without diving into the complex world of analytics, you can most likely explore analytics with the tools already at your disposal, or with those your organization will soon acquire.

If you know a tool is about to be acquired, ask the purchaser about the analytics capabilities you can expect from modern IT Ops tooling. And get up to speed on what's happening, analytics-wise, across the larger scope of IT operations tools by reading "The State of Analytics in IT Operations."
