
Chronicle puts AI into action on security: The time has come for machine learning

Laurent Gil, Co-founder, Zenedge

Alphabet, Google's parent company, recently announced the launch of Chronicle, an artificial intelligence-driven solution for the cybersecurity industry that promises “the power to fight cybercrime on a global scale.” The news may have come as a surprise to many—including some within Google itself.

Just last year, Google's own Heather Adkins, director of information security and privacy, spoke at a conference and expressed skepticism about using AI in the cybersecurity industry. She argued that AI relies too heavily on human-generated feedback and that companies should invest in more human talent and less technology.

To battle the constantly growing and shifting threats from hackers, AI and machine learning are becoming vital tools for combating cybercrime. Security experts need solutions that can both adapt and react to these threats in real time, which requires faster detection than existing security technologies and analysts can provide.

Today, major organizations such as Alphabet are realizing the potential of building AI and machine learning into their cybersecurity efforts, while others remain hesitant to adopt these tools.

Organizations that are resistant to change should consider the following three points.
Increasing manpower won’t solve the problem

A common belief is that increasing the size of a cybersecurity team will increase security and eliminate room for error entirely. But adding staff does not by itself decrease the number or severity of threats. Equifax, with a team of 225 security professionals, still suffered a major breach because a single employee failed to deploy a patch.

Additionally, there isn't enough cybersecurity talent in the workforce to go around. Large organizations such as Google can easily attract top-tier talent because of their stature and influence in the industry. However, smaller organizations are suffering from a shortage of infosec professionals in today's market, and this disparity will only continue to grow. In fact, it's predicted that the global cybersecurity workforce will be short 1.8 million workers by 2022.

Traditional security solutions—and the humans who use them—will fail

Many organizations today rely on off-the-shelf technology to secure their networks, applications, and APIs, yet one size does not fit all. A long list of factors influences the type of cybersecurity solutions an enterprise must deploy: the size of the company, its location, the number of offices, the industry, right down to each piece of software used. To stay ahead of the curve, automation is key. Automation plays a major role in eliminating human error, and as more threat actors employ automation in their attacks, it becomes imperative to use the same tools in defense.

Artificial intelligence is cybersecurity's hope

AI and machine learning are vital facets of the future of information security, and organizations that hesitate to adopt these tools will find themselves at a disadvantage, increasingly exposed to more advanced attacks. AI is potentially limitless and can be smarter and faster than any human, but there is a misconception that AI is meant to fully replace security personnel. Rather, AI should be implemented as a supplement to human talent, empowering security teams with the speed and agility to mitigate threats more effectively.

When it comes to preventing zero-day and other novel attacks, AI operates as an added layer on top of a traditional security control, flagging potential threats that have not been seen before. If a request to a web server inspected by a traditional web application firewall (WAF) does not match an existing rule or signature, does that mean the request is not malicious? No. A request can still be malicious and simply go unflagged by a WAF that relies solely on rules and signatures.
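
To make this point concrete, here is a minimal sketch in Python of purely signature-based inspection. The patterns and the sample request are illustrative only, not drawn from any real WAF ruleset:

```python
import re

# Illustrative signatures; real WAF rulesets contain thousands of patterns.
SIGNATURES = [
    re.compile(r"(?i)union\s+select"),  # classic SQL injection
    re.compile(r"(?i)<script[^>]*>"),   # reflected XSS
    re.compile(r"\.\./"),               # path traversal
]

def matches_signature(request: str) -> bool:
    """Return True only when the request matches a known-bad pattern."""
    return any(sig.search(request) for sig in SIGNATURES)

# A lightly obfuscated injection slips past every rule above, because a
# naive matcher that skips URL decoding sees only the raw bytes:
novel_attack = "GET /search?q=un%69on+sel%65ct+password+from+users"
print(matches_signature(novel_attack))  # False: unmatched, yet malicious
```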

However, an AI-enabled WAF may view the incoming request as unusual because it has seen nothing like it before, or because the request looks similar to something it knows is malicious. The AI-enabled WAF would then alert the human operator that there is an anomaly to be checked.

The human then determines whether the request is indeed malicious and trains the AI-enabled WAF to block exact matches, or anything similar to the request it flagged. In this case, the human operator and the AI-enabled WAF work together to identify new threats, and the more the AI-enabled WAF is trained, the better it gets. This is not so much a prediction as an observation the AI-enabled WAF makes to get ahead of emerging threats.
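
As a rough illustration of that feedback loop, here is a sketch assuming scikit-learn. The model, features, and sample traffic are all hypothetical, and a production AI-enabled WAF would be far more sophisticated:

```python
from sklearn.ensemble import IsolationForest
from sklearn.feature_extraction.text import HashingVectorizer

# Character n-grams give a crude fingerprint of request structure.
vectorizer = HashingVectorizer(analyzer="char", ngram_range=(2, 4),
                               n_features=2**12)
model = IsolationForest(random_state=0)

# Hypothetical baseline of ordinary traffic that defines "normal."
baseline = [
    "GET /index.html",
    "GET /images/logo.png",
    "GET /css/site.css",
    "POST /login user=alice",
    "POST /search q=running+shoes",
]
model.fit(vectorizer.transform(baseline))

blocklist: set[str] = set()

def inspect(request: str) -> str:
    """Route a request: block, escalate to a human analyst, or allow."""
    if request in blocklist:
        return "block"            # previously confirmed malicious
    if model.predict(vectorizer.transform([request]))[0] == -1:
        return "flag_for_review"  # anomaly: unlike the baseline traffic
    return "allow"

def analyst_confirms_malicious(request: str) -> None:
    """The analyst's verdict trains the system: exact matches now block."""
    blocklist.add(request)

odd = "GET /search?q=un%69on+sel%65ct+password"
print(inspect(odd))               # likely "flag_for_review" on this toy data
analyst_confirms_malicious(odd)
print(inspect(odd))               # "block"
```

In practice, blocking "anything similar" would mean retraining the model or applying similarity thresholds rather than keeping an exact-match blocklist, but the division of labor is the same: the model surfaces anomalies, and the human supplies the verdict.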

Machines are your friend, security pros

It’s clear that cybersecurity teams of the future will be much more than just humans installing patches and relying on outdated technologies. Not only will the solutions evolve, but teams will evolve and include security intelligence analysts who can accurately and effectively analyze specific anomalies that are flagged by AI-driven solutions.

As the number of vulnerabilities and cyber threats rises at an ever-accelerating pace, organizations will have only one choice: adapt, or fall victim to an onslaught of complex, sophisticated, multi-vector attacks.