

The state of SIEM: Rocking security in the DevOps age

Jaikumar Vijayan Freelance writer

Modern application development and deployment environments, including continuous delivery and the cloud, have created new complexities for Security Information and Event Management (SIEM) tools. But security organizations that integrate operational data from the myriad data streams in these environments will continue to derive value from their SIEM investments.

SIEM is a unified way to ingest logs, do correlation, integrate threat intelligence, and spur an automated or human response to a security threat, according to Daniel Kennedy, an analyst with 451 Research. "I don't know that the speed of application development [via Agile, DevOps and other means] affects the value proposition for SIEM," he said.

SIEM platforms can bolster an organization's ability to detect and respond to security threats. They work by collecting and aggregating log and security event data from security systems, applications, and other network sources, and applying rules to the data to detect suspicious behavior.
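At its core, that rule-application step is a correlation over normalized events. Here is a minimal sketch, assuming log records have already been parsed into dicts; the field names, threshold, and window are illustrative, not any vendor's schema:

```python
from collections import defaultdict
from datetime import datetime, timedelta

# Illustrative rule: flag a source IP that produces 5 or more failed
# logins within a 60-second sliding window -- a classic brute-force signature.
THRESHOLD = 5
WINDOW = timedelta(seconds=60)

def detect_bruteforce(events):
    """Return the set of source IPs that trip the failed-login rule.

    `events` is an iterable of normalized log records, e.g.:
    {"ts": "2024-05-01T12:00:00", "src_ip": "10.0.0.7", "action": "login_failed"}
    """
    failures = defaultdict(list)  # src_ip -> timestamps of recent failures
    flagged = set()
    for ev in sorted(events, key=lambda e: e["ts"]):
        if ev["action"] != "login_failed":
            continue
        ts = datetime.fromisoformat(ev["ts"])
        failures[ev["src_ip"]].append(ts)
        # Drop failures that have fallen out of the sliding window.
        failures[ev["src_ip"]] = [t for t in failures[ev["src_ip"]]
                                  if ts - t <= WINDOW]
        if len(failures[ev["src_ip"]]) >= THRESHOLD:
            flagged.add(ev["src_ip"])
    return flagged
```

Real SIEM rules are written in a vendor's query or rule language rather than by hand, but the shape is the same: normalize, window, correlate, alert.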

Despite some reservations about the cost and complexity of the technology, organizations are using SIEM to monitor malicious behavior and to centrally manage logs for compliance reporting purposes.

The impact of DevOps and continuous delivery

Many SIEM systems were originally designed for monolithic application environments with a fixed network perimeter. But the proliferation of DevOps, continuous delivery, microservices architectures, containerization, and cloud deployment models has dramatically transformed the environment in which SIEM systems need to operate.

As a result, a growing number of organizations have begun pushing out applications and updates on a near-continuous basis. And they're harnessing flexible cloud and hybrid infrastructures to support these development and delivery models.

But leveraging SIEM in these environments can be challenging, considering the number of moving parts that need to be monitored and for which rules need to be written, according to George Gerchow, vice president of security at Sumo Logic.

A container, for instance, can get deployed, exist within an environment, and go away before a SIEM tool has an opportunity to capture where the container was deployed, whether it was good or bad, or if it had any security vulnerabilities.
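One way around that gap is to record container lifecycle events as they happen rather than scanning after the fact, so even a container that lives for seconds leaves an audit record. A minimal sketch, assuming create/die events are already exported as timestamped records (the field names are illustrative):

```python
from datetime import datetime

def audit_lifecycles(events):
    """Pair container 'create' and 'die' events so short-lived containers
    still leave an audit record with their image name and lifetime.

    `events`: iterable of dicts like
    {"id": "c1", "image": "nginx:1.25", "action": "create", "ts": "2024-05-01T12:00:00"}
    Returns {container_id: {"image": ..., "lifetime_s": float}}.
    """
    created = {}  # container id -> (image, create timestamp)
    audit = {}
    for ev in sorted(events, key=lambda e: e["ts"]):
        ts = datetime.fromisoformat(ev["ts"])
        if ev["action"] == "create":
            created[ev["id"]] = (ev["image"], ts)
        elif ev["action"] == "die" and ev["id"] in created:
            image, start = created.pop(ev["id"])
            audit[ev["id"]] = {
                "image": image,
                "lifetime_s": (ts - start).total_seconds(),
            }
    return audit
```

In practice the event stream would come from the container runtime itself (Docker, for example, exposes a lifecycle event feed), pushed to the SIEM as events occur rather than pulled after the container is gone.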

Similarly, developers are deploying a growing number of applications as a set of microservices that communicate with each other via application programming interfaces (APIs). But many traditional SIEM tools have a hard time discovering and monitoring APIs, Gerchow said.

The cloud over SIEM

The cloud IT infrastructures on which many modern applications run are complex, multi-layered, and consist of software, hardware, and storage systems. Each of these components can generate substantial amounts of data that need to be collected and aggregated, especially if an organization wants to correlate events in the cloud with on-premises events.

Using a SIEM platform to monitor and manage all data sources, and writing rules that can identify problems in these environments, can be a huge challenge, Gerchow said.

To solve the problem, some organizations take a hybrid approach. They rely on cloud service providers or on purpose-built, SIEM-like cloud tools to monitor and analyze cloud workloads while they continue to use traditional SIEM for on-premises environments. The approach makes sense for organizations that have little to gain from correlating events in their cloud and on-premises stacks, Kennedy said.

Others feed logs from cloud-hosted services to an on-premises SIEM tool, where they do the analysis. This has value "if a customer already has a significant SIEM investment on-site and believes there will be cases where correlating logs from both sources makes sense," Kennedy said.

Going forward, organizations will increasingly gravitate towards a single SIEM management console across both their cloud and on-premises environments.

"Things aren't there yet, and security people are doing the best they can with what's available right now, but directionally that's where I would say forces are pushing."
Daniel Kennedy

There are other considerations as well. Depending on your cloud provider, you may not have access to some sources of log data. For example, you might not have access to the firewall in front of your AWS or Azure server, said Matt Watson, founder and CEO of Stackify.

Applications hosted on a platform-as-a-service infrastructure with services like Azure App Services sometimes do not have full access to the operating system. This may limit some of the types of log data available to your SIEM system.

In addition, SIEM solutions need to be easy to install and able to dynamically identify and collect data from new data sources.

"[In a continuous integration and continuous delivery (CI/CD) environment], every time I deploy my application it could provision a brand new server," Watson said. So, the SIEM service needs to be installed on the image of that server or know how to discover and connect to it remotely as soon as it comes online.
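The "discover it as soon as it comes online" requirement usually means a boot-time hook, baked into the machine image or run from instance user data, that announces the new server to the collector. A sketch of what that announcement payload might look like; the field names and the idea of a registration endpoint are hypothetical, since real agents have their own enrollment protocols:

```python
import json
import platform
import socket
from datetime import datetime, timezone

def build_registration(environment: str) -> str:
    """Build the JSON payload a boot-time hook would POST to a SIEM
    collector so a freshly provisioned server announces itself.

    The schema here is illustrative only.
    """
    payload = {
        "hostname": socket.gethostname(),
        "os": platform.system(),
        "environment": environment,  # e.g. "prod", "staging"
        "registered_at": datetime.now(timezone.utc).isoformat(),
    }
    return json.dumps(payload)
```

Because the hook lives in the image that the pipeline provisions from, every new instance reports in the moment it boots, rather than waiting for a periodic discovery scan to notice it.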

The data integration challenge

Traditional SIEM was built to help organizations understand their environment and to identify malicious or anomalous activity. That function has only become more critical as organizations began using modern continuous integration and continuous delivery (CI/CD) toolchains and DevOps approaches to update development environments and applications faster, according to Russ Spitler, vice president of product strategy at AlienVault. "The dynamic nature of a modern environment has a constant impact on the function and dependencies of the services running within."

The most common roadblock to any SIEM deployment in this environment is the ability to integrate operational data into the platform. "At first glance, this challenge appears greater in a modern, dynamic environment, but in reality the opposite is true," Spitler noted.

Here's why: CI/CD models make it relatively simple to change the operational environment with confidence. For example, a development organization could use the CI/CD pipeline to instrument the environment for log collection, substantially improving the SIEM platform in the process, he said.
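As a concrete illustration of "instrumenting the environment through the pipeline," a deploy step could render a log-forwarding config for each service it ships, so log collection is declared alongside the application instead of configured by hand afterward. The schema below is illustrative, not a real log shipper's format:

```python
def log_shipper_config(service: str, log_path: str, siem_endpoint: str) -> dict:
    """Generate a per-service log-forwarding config that a CI/CD deploy
    step could write out next to the application it deploys.

    All field names are hypothetical -- each log shipper has its own format.
    """
    return {
        "source": {"type": "file", "path": log_path, "tag": service},
        "sink": {"type": "https", "endpoint": siem_endpoint},
        # Tagging every record with its service makes correlation in the
        # SIEM possible even as services come and go between deploys.
        "fields": {"service": service, "deployed_by": "ci-pipeline"},
    }
```

The payoff Spitler describes follows from this: because the pipeline owns the config, every deploy can also improve what the SIEM sees.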

"With the adoption of modern development methodologies and deployment frameworks, it is critical that your SIEM can play nice."
Russ Spitler

That means making sure your SIEM platform can discover new assets in a highly dynamic environment. It also means ensuring that your SIEM supports the integration of logs from the CI pipeline and can monitor hybrid environments spanning on-premises infrastructure and cloud environments such as AWS and Azure.

"The combination of these capabilities makes your SIEM platform capable of keeping up with the dynamic nature of the new DevOps world."

It's all about your log data

The main role of SIEM is to capture data from different sources and apply rules that can detect problems. DevOps and CI/CD practices can enable dev teams to develop and deploy applications more often, and microservices break up monolithic applications into smaller components. But these applications still produce the same kind of logs and data that SIEM platforms have been collecting for years.

CI/CD just means people are deploying more often, and microservices may introduce more logs and more complexity to track how the applications all connect to each other. But, Watson said, "I don't think that matters much to SIEM."

Flexible and fast doesn't necessarily mean something is unmanaged, 451 Research's Kennedy agreed. But modern environments such as containers can strain a SIEM system's monitoring capabilities.

"[It's] a question of working out what you want to monitor and working out how to do it."

In essence, the speed at which you can ingest data and generate relevant alerts is the speed at which your SIEM tool can identify problems, he said.
