5 network data types every security team should monitor

John K. Smith Executive VP and CTO, LiveAction

As new technologies such as software-defined networking (SDN) and network function virtualization (NFV) continue to change how networks are architected, it's become increasingly difficult for IT to get a complete picture of the entire network and to measure key performance indicators (KPIs) accurately.

To compensate, organizations often use individual tools to solve individual problems, which can result in tool sprawl. Or they rely on a single source of network data, such as the Simple Network Management Protocol (SNMP), which is no longer sufficient in today’s hybrid IT landscape.

To overcome these challenges, organizations are increasingly deploying network performance monitoring and diagnostic (NPMD) platforms that collect and visualize a variety of network data. The goal is to proactively manage the network from the core (data centers) to the edge (cloud or remote sites).

There are several types and formats of networking data, and each is useful for monitoring and troubleshooting in a different way. Each has pros, cons, and unique quirks, and the most effective IT teams monitor as many of these data types as possible.

Here are the data types you should be collecting—and why.


Network telemetry data

This data is usually collected from networking devices at remote locations and transmitted to monitoring systems for off-net processing and analytics, usually around performance management. There are two primary sources for network telemetry data: flow data and SNMP data. 

"Flow" is a generalized term that includes NetFlow and an array of variants such as sFlow, J-Flow, IPFIX, and so on. Each of these offers an effective view of traffic across a network by providing useful performance data on each device and interface along the entire source-to-destination path.

Flow excels at tracking near-real-time path data for active notification and isolation of issues due to changes in the network.
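To make the flow format concrete, here is a minimal sketch of parsing a NetFlow v5 export datagram in Python. The field layout follows the standard fixed v5 format; only a handful of the record's fields are extracted here:

```python
import socket
import struct

# NetFlow v5 wire layout (standard fixed-format export, network byte order).
HEADER_FMT = "!HHIIIIBBH"                 # 24-byte export header
RECORD_FMT = "!4s4s4sHHIIIIHHBBBBHHBBH"   # 48-byte flow record
HEADER_LEN = struct.calcsize(HEADER_FMT)
RECORD_LEN = struct.calcsize(RECORD_FMT)

def parse_netflow_v5(datagram):
    """Parse one NetFlow v5 export datagram into a list of flow dicts."""
    version, count, *_ = struct.unpack(HEADER_FMT, datagram[:HEADER_LEN])
    if version != 5:
        raise ValueError(f"not a NetFlow v5 datagram (version={version})")
    flows = []
    for i in range(count):
        off = HEADER_LEN + i * RECORD_LEN
        fields = struct.unpack(RECORD_FMT, datagram[off:off + RECORD_LEN])
        flows.append({
            "src": socket.inet_ntoa(fields[0]),   # source IP
            "dst": socket.inet_ntoa(fields[1]),   # destination IP
            "packets": fields[5],                 # dPkts
            "octets": fields[6],                  # dOctets
            "src_port": fields[9],
            "dst_port": fields[10],
            "protocol": fields[13],               # e.g. 6 = TCP
        })
    return flows
```

In practice a collector receives these datagrams over UDP from routers and feeds the decoded records into its analytics pipeline; the other flow variants differ in encoding but carry similar per-flow fields.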

SNMP, for its part, is a polling methodology: a management station queries network elements for a subset of objects exposed through a management information base (MIB) view. This yields data about devices, interfaces, CPUs, and so on for monitoring and collecting the status of the network infrastructure.

Although it's a good foundation for basic network up/down monitoring, SNMP typically does not provide the detailed network information needed to analyze the root cause of application performance problems or many user experience issues, such as the behavior of quality of service (QoS) policies and tunnel performance.
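What SNMP polling does yield—interface byte counters such as ifInOctets—can still be turned into useful metrics. A minimal sketch of computing link utilization from two consecutive polls, assuming a 32-bit Counter32 that wraps at most once per interval:

```python
COUNTER32_MAX = 2**32  # SNMP Counter32 values wrap at 2^32

def utilization_pct(octets_prev, octets_now, interval_s, if_speed_bps):
    """Link utilization (%) from two SNMP ifInOctets samples.

    The modulo handles a single Counter32 wrap between polls; this
    assumes the poll interval is short enough that the counter wraps
    at most once (use 64-bit ifHCInOctets on fast links).
    """
    delta_octets = (octets_now - octets_prev) % COUNTER32_MAX
    bits = delta_octets * 8
    return 100.0 * bits / (interval_s * if_speed_bps)
```

The actual polling would be done with an SNMP library against each device's interface table; the arithmetic above is the part that turns raw counters into a trendable KPI.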

[ Also see: Network functions virtualization: What it is, and why you need it ]

Synthetic testing and virtual software agent data

Synthetic testing is a method of understanding a user's experience with an application by simulating that user's behavior. Cloud applications can lack the visibility and performance data needed to ensure that users are getting the experience they expect.

By using virtual software agents and collecting data from them, IT can continuously monitor these applications. This will ensure that the apps are delivering the latency and path quality needed to ensure optimal performance for end users.
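A virtual agent's simplest synthetic test is timing a connection to the application's endpoint. A minimal sketch, measuring only TCP connect latency (a real agent would also exercise DNS resolution, TLS negotiation, and full application transactions):

```python
import socket
import time

def tcp_connect_latency_ms(host, port, timeout=2.0):
    """Synthetic probe: time a TCP handshake to an application endpoint.

    Returns the connect latency in milliseconds; raises OSError if the
    endpoint is unreachable, which an agent would report as a failure.
    """
    start = time.perf_counter()
    with socket.create_connection((host, port), timeout=timeout):
        return (time.perf_counter() - start) * 1000.0
```

An agent would run probes like this on a schedule from each remote site and export the results to the monitoring platform, so degradation is detected before users complain.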


Application recognition data

Applications running in enterprise networks require different levels of service based on different business requirements. Insight and data are therefore vital to maintaining performance.

Network-Based Application Recognition (NBAR or NBAR2, which is the next generation of the protocol) offers a mechanism that classifies and regulates bandwidth for network applications on certain routers. This data allows network administrators to view the mix of applications in use on the network at any given time and decide how much bandwidth to allow each application, to ensure that available resources are used as efficiently as possible.

NBAR can extract data from protocols including HTTP URL, HTTP User Agent, and SIP URL, for export or classification. NBAR2 works with well over 1,000 applications, with regular updates through NBAR2 protocol packs, and it identifies applications regardless of the ports applications may be running on.

Finally, application categorization uses NBAR2 attributes to group similar applications to simplify application management for both classification and reporting.
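The categorization step can be pictured as a rollup from per-application counters to per-category totals. A minimal sketch, with a small hard-coded mapping standing in for the attribute data a real NBAR2 protocol pack supplies:

```python
from collections import defaultdict

# Hypothetical attribute table: in a real deployment, the application ->
# category mapping comes from the NBAR2 protocol pack, not from code.
APP_CATEGORY = {
    "http": "browsing",
    "ssl": "browsing",
    "rtp": "voice-and-video",
    "sip": "voice-and-video",
    "smtp": "email",
}

def bytes_per_category(flows):
    """Roll per-application byte counts up to category totals.

    flows: iterable of (application_name, byte_count) pairs.
    """
    totals = defaultdict(int)
    for app, nbytes in flows:
        totals[APP_CATEGORY.get(app, "other")] += nbytes
    return dict(totals)
```

Reporting against a handful of categories rather than a thousand individual applications is what makes the traffic mix digestible for capacity and QoS decisions.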

[ See also: A developer's guide: Networking in the age of hybrid cloud ]

Application visibility and control data

Application visibility and control (AVC) data, another important source, incorporates several technologies—including application recognition and performance monitoring—into WAN routers.

Previously, network traffic could easily be identified using well-known port numbers, such as port 80 for HTTP. Today, however, many applications are delivered over HTTP, and others, such as Exchange, voice, and video, use dynamic ports (with voice and video delivered over the Real-time Transport Protocol, or RTP). This makes them impossible to identify by looking at port numbers.

In addition, some applications disguise themselves as HTTP because they do not want to be detected. As a result, identifying applications by checking well-known port numbers is no longer viable. AVC data fills this gap.

AVC is tracked with a combination of metric providers, embedded monitoring agents, and Flexible NetFlow. AVC includes both TCP performance metrics—such as bandwidth use, response time, and latency—and RTP performance metrics including packet loss and jitter.

These metrics are aggregated and exported via NetFlow v9 or the IPFIX format to a management and reporting package.
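The jitter figure reported for RTP streams is conventionally the interarrival jitter estimator defined in RFC 3550. A minimal sketch of that running calculation:

```python
def rtp_jitter(transit_times):
    """Interarrival jitter per RFC 3550: J += (|D| - J) / 16,
    where D is the change in transit time between consecutive packets.

    transit_times: per-packet (arrival time - RTP timestamp) values,
    in RTP timestamp units.
    """
    jitter = 0.0
    for prev, cur in zip(transit_times, transit_times[1:]):
        d = abs(cur - prev)
        jitter += (d - jitter) / 16.0  # exponential smoothing, gain 1/16
    return jitter
```

A steady stream yields a jitter near zero; variation in packet spacing drives the estimate up, which is why this single number is such a useful proxy for voice and video quality.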

APIs and packet capture data

Finally, when capturing data that's useful for isolating root cause, there are two data sources on which network operations teams heavily rely: application programming interface (API) data and packet data.

An API is a set of subroutine definitions, communication protocols, and tools for building software. In general terms, it's a set of clearly defined methods of communication among various components.

In today's SDN environments, the control plane is typically centralized, with a management application and controller defining policies and configurations and pushing them down to devices and functions. Integrating with these management systems via API, and gaining access to that data, gives monitoring tools the path and application ID information needed to understand the business class of traffic and how it is routed through the SDN environment.

[ See also: SDN + NFV: Do the enterprise benefits add up? ]

Many performance and analytics platforms use APIs to integrate with ticketing software for workflow optimization across incident management. When an alert is triggered, the analytics platform can automate the creation of an incident ID (trouble ticket).

The ticket, in turn, includes semantic information such as location, time, and the alert that triggered the incident; this reduces the wait time for the data to get into the hands of engineers eager to solve the issue.
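A sketch of what that integration step can look like: mapping an NPMD alert into a ticket payload. The alert and ticket field names here are hypothetical; a real integration would follow the ticketing vendor's documented REST schema:

```python
import json
import time

def build_incident(alert):
    """Map an NPMD alert dict to a ticketing-API payload.

    The keys on both sides are illustrative assumptions, not a real
    vendor schema; the point is carrying location, time, and the
    triggering alert into the ticket automatically.
    """
    return {
        "summary": f"{alert['metric']} threshold breached at {alert['site']}",
        "severity": alert.get("severity", "minor"),
        "source": "npmd-platform",
        "opened_at": alert.get("timestamp", int(time.time())),
        "details": json.dumps(alert),  # full alert preserved for the engineer
    }
```

The payload would then be POSTed to the ticketing system's REST endpoint; because the context travels with the ticket, the engineer starts troubleshooting with the data already in hand.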

Packet data

Actual data packets are the most granular data an engineer can evaluate. Capturing them and writing them to storage allows for detailed network troubleshooting.

This can help fix problems that can't be solved with flow or other data alone. For example, a flow with high latency could have several root causes. Packet data lets IT see whether a particular application is causing that latency, whether a specific user is responsible, and how often it occurs.
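As an illustration of how granular packet data is, here is a minimal sketch that walks the records of a classic libpcap capture file. It handles only little-endian, microsecond-timestamp captures; nanosecond and big-endian variants, and the newer pcapng format, are not covered:

```python
import struct

PCAP_GLOBAL_FMT = "<IHHiIII"   # classic libpcap file header (24 bytes)
PCAP_RECORD_FMT = "<IIII"      # per-packet record header (16 bytes)

def iter_packets(pcap_bytes):
    """Yield (timestamp_seconds, raw_packet_bytes) for each captured packet."""
    magic = struct.unpack_from("<I", pcap_bytes)[0]
    if magic != 0xA1B2C3D4:
        raise ValueError("unsupported pcap magic (only LE/usec captures handled)")
    off = struct.calcsize(PCAP_GLOBAL_FMT)
    rlen = struct.calcsize(PCAP_RECORD_FMT)
    while off + rlen <= len(pcap_bytes):
        ts_sec, ts_usec, incl_len, _orig_len = struct.unpack_from(
            PCAP_RECORD_FMT, pcap_bytes, off)
        off += rlen
        yield ts_sec + ts_usec / 1e6, pcap_bytes[off:off + incl_len]
        off += incl_len
```

From the raw bytes of each packet, an engineer can decode every protocol layer and correlate individual requests and responses with their timestamps, which is exactly the level of detail flow summaries discard.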

Work toward complete visibility

There are many ways to gather data to measure the performance of networked applications, depending on what and where your resources are. Ultimately, you'll need multiple data sets to deliver a complete end-to-end view of the current state of your network.

Having this comprehensive visibility allows you to proactively manage network performance, isolate and fix problems more quickly, and better plan for larger transformation initiatives such as a software-defined WAN.

If your team cannot monitor all of these data sources through one consolidated network performance management platform, it's time to reconsider your strategy. Have a discussion with your IT team or service provider to evaluate which of these data types you can measure now, and how you can add the capability to measure the ones you are missing.
