
Essential Guide: AI and the SOC—TechBeacon Special Report

Christopher Null Freelance writer


The cybersecurity battle is being lost on almost every front. In addition to the usual battery of threats aimed at organizations, automated, multi-phased attacks are becoming more common, and organizations are on their heels.

The good news is that hope is not lost. Many consider one of science fiction's oldest tropes—artificial intelligence—to be their greatest hope for shifting the balance of power.

A recent study by Capgemini Research Institute, "Reinventing Cybersecurity with Artificial Intelligence: The New Frontier in Digital Security," said that before 2019, only about 20% of cybersecurity organizations used AI in their technology stacks. But the report concluded that "adoption is poised to skyrocket," with 63% of organizations planning AI deployments by the end of 2020.

That shouldn't be surprising. Companies are seeing an unprecedented explosion in the volume and sophistication of cyber attacks, and there are no signs of the situation abating any time soon.

Security operations centers (SOCs) are overwhelmed by a fusillade of alerts. Threat actors are increasingly using automation to attack more quickly and effectively. At the same time, security teams find themselves short on personnel, and some of the staff they do have are under-skilled.

The tedium of investigating and responding to this parade of known and unknown threats, along with the daunting odds of effectively containing them, is putting analysts at risk of burnout.

No wonder, then, that SOCs are turning to AI for relief.

Empowering the SOC with AI-enabled tools is proving essential to turning the tide on cyber crime. It reduces the amount of time analysts have to spend on repetitive, time-consuming tasks, so they can dive deeper into threat analysis and response. AI allows attacks to be identified and resolved more quickly, minimizing their impact and cost for the organization. And it positions analysts to work smarter rather than harder, providing a much-needed boost to productivity and morale.

However, it's important to understand that AI is not one technology but many. The transformation of the SOC is being driven mainly by one AI subset: machine learning. This form of AI automatically learns from data to make predictions, draw inferences, and discover patterns. This makes it uniquely suited to leverage the colossal amounts of security data that enterprises produce.

But bringing AI into the SOC is not as straightforward as procuring a tool set and waiting for the magic to happen. It requires careful consideration of your organization's business needs, your SOC's skill set, and the outcomes you hope to achieve.

Failure to adequately prepare will likely have dire consequences. Gartner predicts that through 2022, some 85% of AI projects will deliver erroneous outcomes due to bias in data, algorithms, or the teams managing them.

In the following sections, you'll learn how AI is transforming the SOC. First up: Why AI and advanced analytics-specific tools are critical given the changing tech landscape. Next, you'll read about the key trends in AI and how AI is being applied to security and SOC teams. Lastly, you'll learn about the criteria for considering AI-powered tools for your SOC.

AI in the SOC: What it is and why it matters

When we consider everything the SOC is tasked with doing, the critical role AI can play within it becomes clearer. Broadly speaking, the SOC is responsible for centralizing security information for the entire organization, monitoring events in real time, and responding to security incidents. Any of these responsibilities can benefit from using AI.

But despite the hype surrounding AI in the enterprise, there is still a considerable amount of confusion about exactly what AI is and what its capabilities are. A bedrock understanding of AI is essential for determining how it can fit into your security operations and what results you can expect if you implement it.

AI is best understood as an umbrella term for machines engineered to replicate natural human intelligence. Humans can speak and hear, read and write, move through their physical environment, perceive input from the world around them, convert that input into knowledge, and solve problems.

AI analogs exist for each of these human behaviors. Natural-language processing, for example, helps computers understand and interpret human languages. Computer vision helps them see and process visual information. Pattern recognition allows them to identify groups of similar objects.

Machine learning is key

Machine learning (ML) is perhaps the most popular and widely implemented field of AI. At its most basic, ML uses algorithms to autonomously process massive amounts of data, learn from it, make predictions or determinations based on what it has learned, and improve its performance over time.

When used for security purposes, ML is generally classified as one of two types—supervised and unsupervised—each uniquely suited to particular use cases.

In the supervised model, an algorithm is guided using a dataset that includes examples with specific outcomes for each one. The algorithm is told what variables to analyze and is given feedback on the accuracy of its predictions. In this way, the algorithm is "trained" using existing data to predict the outcome of new data.

Conversely, unsupervised ML explores data without reference to known outcomes. It's best used to identify previously unknown patterns in unstructured data and group them according to their similarities.
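To make the distinction concrete, here is a toy Python sketch of both modes, using invented login-event feature vectors of [hour_of_login, megabytes_transferred]. This is a simplification for illustration, not any vendor's implementation:

```python
# Toy illustration of supervised vs. unsupervised ML on invented
# login-event features: [hour_of_login, megabytes_transferred].

def train_supervised(labeled):
    """Supervised: learn a per-class centroid from labeled examples."""
    sums, counts = {}, {}
    for features, label in labeled:
        acc = sums.setdefault(label, [0.0] * len(features))
        for i, v in enumerate(features):
            acc[i] += v
        counts[label] = counts.get(label, 0) + 1
    return {lab: [v / counts[lab] for v in acc] for lab, acc in sums.items()}

def predict(centroids, features):
    """Assign the label of the nearest class centroid."""
    def sqdist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(centroids, key=lambda lab: sqdist(centroids[lab], features))

def flag_anomalies(rows, threshold=1.5):
    """Unsupervised: flag rows far from the column means (z-score).
    The low threshold suits this tiny toy dataset."""
    n = len(rows)
    cols = list(zip(*rows))
    means = [sum(c) / n for c in cols]
    stds = [max((sum((x - m) ** 2 for x in c) / n) ** 0.5, 1e-9)
            for c, m in zip(cols, means)]
    return [i for i, row in enumerate(rows)
            if any(abs(x - m) / s > threshold
                   for x, m, s in zip(row, means, stds))]

# Supervised: labeled history of benign vs. malicious logins.
labeled = [([9, 5], "benign"), ([10, 8], "benign"),
           ([2, 500], "malicious"), ([3, 450], "malicious")]
centroids = train_supervised(labeled)
print(predict(centroids, [2, 480]))   # → malicious

# Unsupervised: no labels; just surface the outlier at index 3.
events = [[9, 5], [10, 8], [11, 6], [2, 500]]
print(flag_anomalies(events))         # → [3]
```

Note how the supervised model needs labeled history, while the unsupervised one only needs enough unlabeled data to establish what "typical" looks like. Production systems would use far richer features and established libraries rather than hand-rolled statistics.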

The tooling landscape

Security tools have become increasingly sophisticated to keep up with the rising tide of advanced cyber attacks. Security information and event management (SIEM) tools remain foundational for SOCs, providing a centralized way to identify, monitor, analyze, and record security incidents in real time. These tools aggregate, normalize, and apply analytics to security data from across the organization to discover security incidents and alert the SOC team to them.
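The aggregate-and-normalize step a SIEM performs can be pictured as mapping heterogeneous log formats onto one common event schema. A minimal sketch, with invented log formats and field names:

```python
import json
import re
from datetime import datetime, timezone

# Two invented raw formats; real SIEMs parse hundreds of formats
# (syslog, CEF, JSON, Windows Event Log, ...).
RAW = [
    'fw01 2024-05-01T02:14:07Z DENY src=203.0.113.9 dst=10.0.0.5',
    '{"ts": 1714529647, "host": "web01", "event": "login_failed", "user": "jsmith"}',
]

def normalize(line):
    """Map one raw log line onto a common schema: time, source, action, detail."""
    if line.startswith('{'):
        rec = json.loads(line)
        return {"time": datetime.fromtimestamp(rec["ts"], tz=timezone.utc),
                "source": rec["host"],
                "action": rec["event"],
                "detail": {k: v for k, v in rec.items()
                           if k not in ("ts", "host", "event")}}
    host, ts, action, rest = re.match(r'(\S+) (\S+) (\S+) (.*)', line).groups()
    return {"time": datetime.strptime(ts, "%Y-%m-%dT%H:%M:%SZ")
                            .replace(tzinfo=timezone.utc),
            "source": host,
            "action": action.lower(),
            "detail": dict(kv.split("=", 1) for kv in rest.split())}

events = [normalize(line) for line in RAW]
for e in events:
    print(e["time"].isoformat(), e["source"], e["action"])
```

Once everything shares one schema, analytics and correlation rules can run across all sources at once, which is what makes the "centralized" part of a SIEM possible.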

However, SIEMs’ inability to detect manually instigated attacks and their reactive, alert-based approach make them inadequate on their own. To shore up their defenses, SOCs often integrate their SIEMs with other, more specialized tools.

Security orchestration, automation, and response (SOAR) is often a first step. Tools in this category can automate critical security processes such as investigating phishing messages, managing cloud security, and detecting compromised user accounts. Other tools address endpoint detection and response, network detection and response, vulnerability management, and threat intelligence.
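A SOAR playbook is essentially codified incident response: enrichment steps, a decision, and containment actions triggered by an alert. A toy phishing-triage playbook; the function names, stub logic, and domain list are all hypothetical:

```python
import re

# Hypothetical phishing-triage playbook; a real SOAR product would call
# out to mail, threat-intel, and identity APIs instead of these stubs.
KNOWN_BAD_DOMAINS = {"evil.example", "phish.example"}

def extract_urls(message):
    """Pull the domains of any links out of the message body."""
    return re.findall(r'https?://([\w.-]+)', message["body"])

def check_reputation(domain):
    """Stub threat-intel lookup against a static blocklist."""
    return "malicious" if domain in KNOWN_BAD_DOMAINS else "unknown"

def run_playbook(message):
    """Enrich, decide, and record actions for one reported phishing email."""
    verdicts = {d: check_reputation(d) for d in extract_urls(message)}
    if any(v == "malicious" for v in verdicts.values()):
        actions = ["quarantine_message", "block_sender", "notify_user"]
        severity = "high"
    else:
        actions = ["escalate_to_analyst"]   # a human decides the unknowns
        severity = "low"
    return {"severity": severity, "verdicts": verdicts, "actions": actions}

report = run_playbook({
    "sender": "it-support@evil.example",
    "body": "Reset your password at https://evil.example/login now!",
})
print(report["severity"], report["actions"])
```

The point of the pattern is that the machine handles the clear-cut cases end to end and routes only the ambiguous ones to a human, which is exactly the workload split described in this report.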

These tools vary from centralized investigation platforms used by the whole team to specific tools that an analyst may use only within an individual investigation workflow. Tools may also cover tasks such as threat intelligence that can be consumed as external services, said Fernando Montenegro, principal analyst for information security at 451 Research, now part of S&P Global Market Intelligence.

Successfully integrated, these tools limit blind spots and improve coverage of the organization’s attack surface.

Of course, the best tools are useless if not applied correctly. Businesses must make sure they have the knowledge, resources, and strategy to combine these tools into a sophisticated SOC.

The modern SOC has three phases—visibility, management, and response—said Cody Cornell, co-founder and CEO of SOAR provider Swimlane. To have visibility, you need telemetry from on-premises and cloud systems, networks, and users to see what is happening in the environment. "This is generally a combination of monitoring data that is aggregated into some log management system that has a strong search capability," he said.

Once you have strong visibility into your systems, you need to be able to manage both the information from your monitoring system and external information such as intelligence and notifications from various sources—the Information Sharing and Analysis Center, government sources, partners, and vendors.

"If there is no consistent and iterative management system for information you are always going to be searching for the things you need, and it causes considerable context switching, which is inefficient," Cornell said.

Finally, once you have visibility and you're managing internal and external information well, "you need to be able to take action on it," he said. Your response can take many paths—including remediation, recovery, mitigation, and investigation. The tools to help you take action can be the same tools that you use for visibility, "if you selected tools that have the right features and you make those features available via APIs to be better centrally instrumented," Cornell said.

Other ways AI can help the SOC

SOCs face multiple challenges. Some 63% of organizations say that conducting cybersecurity analytics and operations in general is more difficult today than it was two years ago, according to a recent Enterprise Strategy Group survey. Organizations cite as contributing factors the rapidly evolving threat landscape, the increasing volume of cybersecurity telemetry data, and the increasing volume of alerts.

Most security operations try to address these problems manually, by throwing people at the tidal wave of alerts and hoping they are skilled enough to do a good job, said Chris Triolo, vice president of customer success at security vendor Respond Software.

But there are several factors that make this a less-than-ideal strategy. These include:

Staff shortages

The dearth of human capital in the SOC is a crisis. Each year, 40,000 US cybersecurity jobs go unfilled because the demand exceeds the pool of qualified talent, according to government data. That leaves too few people to keep up with the battery of alerts and attacks that the average SOC faces daily.

"As an attacker, all I need to do is find one way in, and I'm in," said Mario Daigle, vice president of products at Interset, a provider of security-analytics solutions now owned by Micro Focus (TechBeacon's corporate parent). "As a defender, I have to block every possible point of entry—which I would argue is impossible."

As a result, the deck is "completely stacked against the good guys," Daigle said. Without the ability to use AI, you don't have enough humans to find these threats and exploits. "It's guaranteed someone is going to find a way in. When they do, your only hope is having machines help your humans be more efficient; otherwise you're not going to find them."

Repetitive tasks

People are good at cognitive tasks, but not at repetitive, high-volume activities—"and there are a lot of the latter in the SOC," Respond Software's Triolo said. Machines, on the other hand, excel at these activities.

Using AI and ML to analyze and triage security data provides more depth and consistency than even the best human analysis can match. Relieving SOC team members of this mundane activity, Triolo said, frees them to focus on hunting for real attacks and chasing down the attackers. It's more engaging work that can go a long way toward preventing the burnout issues most SOC teams grapple with.

Lack of technological expertise

As at many other organizations nowadays, "there is a severe resource constraint at the same time that there is an almost Cambrian explosion in the technologies that are being used" in the SOC, said 451 Research's Montenegro.

Indeed, more than half of respondents in another ESG study said their organizations don't have the right skills or staff size to keep up with SecOps and analytics. AI-based security solutions can operate 24/7 and automate much of the work of lower-level analysts, allowing for fewer and lower-cost SOC personnel while significantly reducing the time to detect and remediate threats.

Top trends in AI, and what AI means for your SecOps team

Currently, there are three main trends for AI usage in the SOC.

1. Malware detection

Malware detection is perhaps the most popular application of AI in the SOC. Historically, detection has focused on malware signatures, but this activity has become increasingly difficult as those signatures have grown more dynamic over time. Signature-based detection is also unsuited to first-of-a-kind malware such as advanced persistent threats. With no previous example to reference, a human would essentially have to notice some anomalous behavior to discover this type of malware.

In many SOCs, AI has stepped in. Armed with a decade's worth of malware examples, teams can use supervised ML to train detection algorithms to recognize malware based on behaviors rather than signatures.

For example, some activity may look like malware because a process has moved from one directory to another and then reached out to the network. Once such models are deployed in the environment, SOCs find they can detect more malware than they could before, according to Stephan Jou, CTO of Interset.

Malware detection is the most mature and successful application of AI in the SOC, Jou said, largely because the security community as a whole has had so much experience with malware to learn from. AI does not replace signature-based malware detection; it complements it and makes detection much more effective overall.
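The behavior-based approach can be sketched as learning, from labeled historical samples, how strongly each observed behavior correlates with malware. This is a crude stand-in for the supervised models the article describes, and the behavior names are invented:

```python
# Toy behavior-based detector: learn, from labeled past samples, how
# strongly each behavior is associated with malware. The behavior names
# are invented; real features would come from sandbox or endpoint traces.
TRAINING = [
    ({"moves_between_dirs", "opens_network_conn", "encrypts_files"}, True),
    ({"moves_between_dirs", "opens_network_conn"}, True),
    ({"reads_config", "writes_log"}, False),
    ({"reads_config", "opens_network_conn"}, False),
]

def train(samples, smoothing=1.0):
    """Estimate P(malware | behavior) per behavior, with Laplace smoothing."""
    counts = {}
    for behaviors, is_malware in samples:
        for b in behaviors:
            mal, tot = counts.get(b, (0, 0))
            counts[b] = (mal + int(is_malware), tot + 1)
    return {b: (mal + smoothing) / (tot + 2 * smoothing)
            for b, (mal, tot) in counts.items()}

def score(model, behaviors, default=0.5):
    """Average the per-behavior malware probabilities for a new sample."""
    probs = [model.get(b, default) for b in behaviors]
    return sum(probs) / len(probs)

model = train(TRAINING)
# A brand-new sample with no known signature, judged purely by behavior:
s = score(model, {"moves_between_dirs", "opens_network_conn"})
print(f"score={s:.2f}", "flag" if s > 0.6 else "pass")
```

Because the verdict rests on what the sample does rather than what its bytes look like, the same model can flag first-of-a-kind malware that signature matching would miss.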

2. User and entity behavior analytics (UEBA)

With their system-centric focus, SIEMs have a harder time detecting manually executed attacks than those implemented by malware. The anomalies that signal these insider threats—an employee logging into a server he's never accessed before, for example—typically have to be spotted by a human analyst. That kind of serendipity is in short supply in resource-strapped SOCs.

AI is filling in the gaps here, too, employing UEBA. UEBA uses a technique known as anomaly detection, which is driven by unsupervised ML.

UEBA monitors the behavior of every user within the organization and learns what is "normal" so it can recognize activity that is "abnormal." A typical example would be recognizing that John, who never logs in after work hours and always accesses machines A, B, and C, has suddenly logged in at 2 a.m. and accessed machines E and F. The system can then process this information and determine whether the behavior is a prelude to an attack.
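The John scenario can be sketched as a per-user baseline plus a deviation check. Real UEBA models many more signals probabilistically; this toy version uses exact set membership:

```python
from collections import defaultdict

def build_baselines(history):
    """Learn each user's 'normal' login hours and hosts from past events."""
    base = defaultdict(lambda: {"hours": set(), "hosts": set()})
    for user, hour, host in history:
        base[user]["hours"].add(hour)
        base[user]["hosts"].add(host)
    return base

def anomalies(baseline, event):
    """Return which aspects of a new login deviate from the user's baseline."""
    user, hour, host = event
    b = baseline[user]
    out = []
    if hour not in b["hours"]:
        out.append(f"unusual hour: {hour}:00")
    if host not in b["hosts"]:
        out.append(f"unusual host: {host}")
    return out

# Invented login history: (user, hour_of_day, host).
history = [("john", 9, "A"), ("john", 10, "B"), ("john", 14, "C"),
           ("jane", 2, "E")]
baseline = build_baselines(history)

print(anomalies(baseline, ("john", 2, "E")))   # → ['unusual hour: 2:00', 'unusual host: E']
print(anomalies(baseline, ("jane", 2, "E")))   # → []
```

Note that the same 2 a.m. login on host E is anomalous for John but normal for Jane; the per-entity baseline, not any global rule, is what makes the behavior suspicious.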

The benefits of behavioral analytics are clear: with them, the SOC can better detect insider threats, hacked privileged accounts, and brute-force attacks. However, UEBA adoption slowed early on, when tooling vendors overpromised on results. As companies have realized that UEBA, and AI in general, complements existing technologies rather than replacing them, it has been more widely embraced in the SOC.

"The use cases are starting to come out, and the customer stories are real now," Interset's Jou said. "I hear more and more companies saying they’re finding things they couldn't have found before with traditional tools."

3. Threat hunting

AI is also having an impact on threat hunting. That may at first seem counterintuitive when you consider how much this proactive technique relies on the human brain. Highly skilled security professionals hypothesize potential attacks in their heads, based on a vast knowledge of the threat landscape. Then, working under the assumption that attackers have already penetrated the system, they attempt to detect, isolate, and contain or eliminate sophisticated threats that slip past automated defenses.

But threat hunting relies on analyzing huge volumes of security data, and virtually all of it can be automated using a combination of traditional and cutting-edge technologies, according to Interset's Jou. Threat hunting is probably the most immature of the AI applications in the SOC, he said, but it's the skill that's most in demand.

"Most SOCs don't have threat hunters; they need to contract that out" or use a managed service, he said. "If you have one or two on your team, you're very, very lucky."

There are still tools that "we need to build to completely automate the things a threat hunter would do to make them more effective," he said. But this AI application "is the one that has the most promise."

Future applications

ML is what "a lot of people in security think about when they think of AI," Jou said. "But AI is actually a lot more. There are things you can do in AI that go way beyond just learning things and finding patterns that are weird."

He sees the SOC employing AI for optimization and recommendation. For example, many cellphone providers currently use AI-enabled customer service systems that look at the caller's case history, including how many complaints they've made in the past and their severity. Then they recommend a course of action to the customer service rep that's been optimized for customer retention.

He sees a similar approach being applied to threat hunting, where an AI-based system could look at the worldwide history of attacks, make a determination on a threat in that context, and recommend how best to contain that threat.

Jou also foresees natural-language processing playing a bigger role in threat hunting. "Remember," he said, "the threat hunter has made a hypothesis in his head, then he's having a dialogue with the data he has around him to try and figure out what might be happening in the organization. I would love for that dialogue to be a much more natural interface, almost like a conversation with the data as opposed to building a query syntax."

Technically, "we can already create that conversational interface; we just haven't done it yet," Jou said.

AI's eventual influence on the SOC

Given how dramatically AI is redefining cybersecurity, it's a given that it will impact the makeup of the SOC and how it conducts its daily business. The debate is over how deep and broad that effect will eventually be. 

The industry agrees on one thing: AI is not meant to replace workers en masse. All the AI applications discussed here augment the efforts of the SOC team and help them to be more efficient. AI is also not meant to replace the SIEM or any other tool that's already proving effective.

But while AI won't replace jobs or commodity tools, it can, and probably should, replace tasks. In fact, when it's implemented wisely, AI can help the SOC reclaim and reallocate resources. For example, monitoring, prioritizing, and triaging security alerts, the basic duties of a Tier 1 analyst, can be performed just as well by automation and orchestration technologies, which also scale more easily.
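Tier-1 triage—deduplicating, scoring, and ordering alerts—is exactly the kind of rule-driven work that automates well. A minimal sketch, with invented severity weights and alert fields:

```python
from collections import OrderedDict

# Invented severity scale; real tools score on many more dimensions.
SEVERITY = {"critical": 3, "high": 2, "medium": 1, "low": 0}

def triage(alerts):
    """Deduplicate alerts by (source, signature), count repeats, and
    return a work queue ordered by severity, then repeat count."""
    merged = OrderedDict()
    for a in alerts:
        key = (a["source"], a["signature"])
        if key in merged:
            merged[key]["count"] += 1
        else:
            merged[key] = dict(a, count=1)
    return sorted(merged.values(),
                  key=lambda a: (SEVERITY[a["severity"]], a["count"]),
                  reverse=True)

alerts = [
    {"source": "ids",  "signature": "port-scan",   "severity": "low"},
    {"source": "edr",  "signature": "ransom-note", "severity": "critical"},
    {"source": "ids",  "signature": "port-scan",   "severity": "low"},
    {"source": "auth", "signature": "brute-force", "severity": "high"},
]
queue = triage(alerts)
print([(a["signature"], a["count"]) for a in queue])
# → [('ransom-note', 1), ('brute-force', 1), ('port-scan', 2)]
```

A human analyst then starts at the top of a short, prioritized queue instead of wading through every raw alert, which is the resource reallocation the paragraph above describes.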

That allows organizations to move their Tier-1 analysts up into Tier-2 roles, such as incident analysis and response. Respond Software's Triolo said this increases the security team's capabilities without increasing its budget.

Similarly, tools such as anomaly detection don't replace the need for human vetting but ideally allow security engineers to focus on fewer and more targeted incidents and to waste less time digging through logs and alerts, said Kenny Daniel, founder of ML vendor Algorithmia.

A need for more SOC data science

AI tools will shift the balance of the SOC in another way, Daniel said. Teams will need stronger data science representation to manage their new ML tools and to understand the data and signals available, as well as to help engineer defenses for their particular systems.

Generally, this will mean hiring people with higher levels of training or require additional training for existing team members. But "while that may come with a higher cost in the beginning," using these tools to help streamline your response might mean you need fewer people to do the work, said Ryan Schonfeld, founder and CEO of RAS Watch, a security incident-management tool.

Another option is outsourcing some or all of your SOC functions to a company that already has these tools in place, to reduce cost and increase value for the company.

Ultimately, AI doesn’t radically change the skill requirements for the SOC but instead maximizes the ones it already has on hand.

"If AI is effective, it also frees up analysts to do higher-order thinking, applying judgment, organizational knowledge, and relationships," said Isaac Kohn, a risk advisory principal at Deloitte. "So those skills are more important than ever."

How to select the right tool

It's tempting for SOCs to start their AI transformation by acquiring the trendiest tools. But you should employ a more strategic approach to ensure a successful result. Here are the essential factors for tool selection.

Start with the problem you're trying to solve

It can't be said enough that choosing the right AI solution should start with clarifying the issues your SOC struggles with most. Are you missing breaches? Do you have a malware problem? Did a rogue employee delete some important data? Are your analysts wasting time on repetitive tasks at the expense of more business-critical activities?

AI can help with all of these issues, but they require different types of tools, and throwing the wrong one at a problem will inevitably lead to wasted time and resources. So, for example, if you're looking for a malware-detection tool, don't confuse that with network threat analysis or risk mitigation. 

Take the time to reflect on which security issues you're most concerned about—and, specifically, what you're being asked to report to leadership about—as a way to gain clarity around how AI can help you, Interset's Jou recommended.

Make sure you have the data

Data is the fuel of AI. The larger the quantity and the better the quality of the data you put in, the faster and more accurate solutions AI will produce. As such, security leaders must ask the following questions about their data to maximize their AI efforts:

  • Do you have the data? The goal is to have enough to represent every context that a system may encounter.

  • Is it "good" data? AI is only as good as the data you feed it. That means it should be accurate over a long period of time, with few inconsistencies, errors, or corruptions.

  • Is the data labeled? Supervised ML requires data to be labeled in order to identify the properties or characteristics that can be used to train the algorithm. 

  • Is the data up to date? The threat landscape evolves at lightning speed, so current data is essential. A system trained using old data will struggle to detect the latest threats.

  • What is the source of the data? Make sure the data you intend to use is from a trusted source. Data from a suspect source may be inaccurate, corrupt, or even perverted by data sample poisoning, when a malicious user injects false training data to thwart the learning model.
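The checklist above can be partially automated as a pre-training audit. A sketch, assuming a simple record format with label, timestamp, and source fields (all invented for illustration):

```python
from datetime import datetime, timedelta

def audit_dataset(records, max_age_days=90, trusted=("internal-siem",)):
    """Return a list of problems to fix before training on `records`.
    Each record: {"label": ..., "ts": datetime, "source": str}."""
    if not records:
        return ["no data at all"]
    problems = []
    unlabeled = sum(1 for r in records if r.get("label") is None)
    if unlabeled:
        problems.append(f"{unlabeled} unlabeled records (supervised ML needs labels)")
    cutoff = datetime.now() - timedelta(days=max_age_days)
    stale = sum(1 for r in records if r["ts"] < cutoff)
    if stale:
        problems.append(f"{stale} records older than {max_age_days} days")
    untrusted = {r["source"] for r in records} - set(trusted)
    if untrusted:
        problems.append(f"records from untrusted sources: {sorted(untrusted)}")
    return problems

now = datetime.now()
records = [
    {"label": "benign", "ts": now, "source": "internal-siem"},
    {"label": None, "ts": now - timedelta(days=400), "source": "pastebin"},
]
for p in audit_dataset(records):
    print("-", p)
```

Checks like these can't judge subtler issues such as poisoned samples, but they catch the obvious gaps—missing labels, stale history, unvetted sources—before any training time is spent.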

While it's common for organizations to find they have unclean or unlabeled data, those hurdles can be overcome with the help of third-party services. If, however, you find your organization doesn't have the required data or the skill to understand it, Interset's Daigle said, you're at an earlier stage of maturity and may have to take a step back before you start buying tools.

Decide whether to build or buy your AI

Here again, 451 Research's Montenegro said, you need a clear understanding of just what your business requirements are. Do you have specific needs related to your organization, such as compliance, complexity, risk tolerance, different threat models, specific resource constraints, or other considerations?

These answers are essential for your organization to determine whether you're going to undertake AI tool development yourself or if you're going to work with an AI-powered tool from an external provider—or something in between, he said.

Once you have your business criteria nailed down, weigh them against the relative pros and cons of building versus buying an AI solution. 

Building AI in-house

A do-it-yourself approach allows you to build an AI solution tailored to your organization's needs instead of paying for a package that likely includes features you'll never use. It offers greater flexibility, allowing you to modify it on the fly and optimize it for future needs.

The downside is the time commitment and skills required. Deploying the system, training the data, reducing false positives, and the other considerable hurdles may require years of effort and come at a great cost. And that's assuming you have the expertise in-house, which most organizations don't.

Buying packaged software

Many of the advantages of buying an AI solution are the same that come with working with any third-party service. The vendor shoulders the infrastructure and maintenance expenses, as well as many of the security responsibilities, and sometimes ensures compatibility with other software.

This approach also provides the AI expertise most organizations lack and can leverage best practices from other clients to improve the implementation and performance of the product.

However, as is the case with any emerging technology, many vendors merely bolt AI onto their existing products, and almost all use their customers for research and development.

This can create a learning curve that can lead to lost time and money, especially in the short term. Feature bloat is also a potential problem for those organizations that need more customized solutions.

Other factors to keep in mind

Whichever route you decide is best for your company, it's fundamental to have serious, realistic expectations about AI tools' capabilities and applications.

"Any tool that deals in absolutes or near absolutes warrants a lot of suspicion," 451 Research's Montenegro said. The tooling should match the expected usage.

If yours is a SOC team that prefers to experiment with building your own tools and models, for example, choosing a tool that only uses AI internally and delivers the results to you may not be what you're looking for, he added.

The reverse is also true, he said: Choosing an AI development platform that "expects you to do the heavy lifting" in terms of creating the data pipelines, the models, etc., "when all you want is results, is not going to help you either."

Next steps

AI is transforming the SOC. As cyber attacks on organizations become more organized and efficient, augmenting your team's analysts with AI-enabled tools gives you the best chance of gaining the upper hand.

Consider these steps to put your AI plan into action:

  • Start with the problem you want to solve, not with the “cool tool.” Understand that tools alone don’t solve business or security problems. Many organizations invest repeatedly in emerging tools—including AI—without fixing the operational or organizational issues that prevent them from achieving their desired outcomes.

  • Think realistically about how far you want to go with AI. Do you want to build your own tools and models, or do you just want to consume results? If the former, make sure you have the time and resources to commit to building and managing your own solution. If the latter, make sure you have the expertise, such as a data scientist or other tech staff, to vet vendors and evaluate their offerings.

  • Determine whether you have enough data within your organization to properly train the models you're going to use, or whether you would be better served by an external provider that can draw on data from its other customers.

  • Make sure the basics are in place—namely an established control fabric and sensor grid. A good foundation for security operations is a log management strategy that allows you to store, search, and analyze data for investigations.

  • With the basics in place, you can focus on implementing security automation and workflow. This will increase the organization's ability to isolate high-priority incidents, while maximizing the efficiency of the analyst team. Team leads should focus efforts on automating areas that are major time burdens on the staff or areas where humans don't typically perform well.

Stay focused on your objectives and be thoughtful about how to measure them. Different organizations have different priorities, whether it's reducing the number of successfully executed attacks or reducing costs by being more effective with the security resources already in place.
