
When machine learning is hacked: 4 lessons from Cylance

John P. Mello Jr. Freelance writer

Artificial intelligence (AI) has become all the rage in cybersecurity circles, but a recently discovered universal bypass of a machine-learning (ML) algorithm in BlackBerry's Cylance cybersecurity suite offers some valuable lessons for organizations mulling AI security solutions.

The bypass was discovered by researchers at Skylight, a firm founded by Israeli government security veterans Adi Ashkenazy and Shahar Zini. After a careful analysis of Cylance's antivirus product, the researchers discovered a bias toward a particular game.

They leveraged that knowledge to craft a universal method for bypassing the software by simply appending a selected list of strings to any malicious file. The method was 100% successful for the top 10 malware programs for the month of May—and 90% effective for a larger universe of 384 malicious applications, the researchers said.
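A toy sketch can illustrate why appending strings works as an evasion. This is not Cylance's model; the string weights and the linear scorer below are entirely hypothetical, chosen only to show how a classifier that weighs string features can be dragged across its decision threshold by bytes that never execute.

```python
# Toy illustration (NOT Cylance's actual model): a naive linear scorer
# over string features, showing how appended "benign" strings can flip
# a file's verdict without changing its executable behavior.

SUSPICIOUS = {b"CreateRemoteThread": 2.0, b"VirtualAllocEx": 1.5}
BENIGN = {b"Copyright": -1.0, b"Microsoft": -1.5, b"GameEngine": -2.0}

def score(data: bytes) -> float:
    """Sum feature weights for every known string present in the file."""
    total = 0.0
    for s, w in {**SUSPICIOUS, **BENIGN}.items():
        if s in data:
            total += w
    return total  # positive => flagged as malicious

malicious = b"...payload...CreateRemoteThread...VirtualAllocEx..."
print(score(malicious))   # 3.5 -> flagged

# Appending strings associated with a "trusted" program drags the score
# negative, even though the executable portion is unchanged.
bypassed = malicious + b"Copyright Microsoft GameEngine"
print(score(bypassed))    # -1.0 -> passes
```

Real models are far more complex than this linear sum, but the principle the researchers exploited is the same: if a small set of features carries enough weight, controlling those features controls the verdict.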

Cylance has acknowledged that its ML algorithm was flawed, but it said in a company blog post that it was not a universal bypass. The company added that it had corrected the issue in its cloud service, and would do so with its endpoint software shortly.

Regardless, the lessons for security teams are clear. Here are four.

1. AI and ML can automate security tasks, but they are not set-and-forget

As Cylance pointed out in its blog, AI and ML models are "living models." They are designed to evolve and require periodic retraining and field servicing.

Cylance said that, rather than finding a universal bypass, the researchers had discovered "a technique that allowed for one of the anti-malware components of the product to be bypassed in certain circumstances."

Analyzing a file with ML is a multi-stage process, Cylance said. First the file is parsed, which extracts artifacts from it known as "features." These can be anything about the file that can be interpreted or measured. Those features are then passed to an algorithm for analysis.

"This vulnerability allows the manipulation of a specific type of feature analyzed by the algorithm that in limited circumstances will cause the model to reach an incorrect conclusion," Cylance said in the post.
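The parse-then-classify flow Cylance describes can be sketched in miniature. Everything below is a hypothetical stand-in (the feature names and the single entropy rule are invented for illustration), but it shows the two stages and why a model that leans on one feature type is exposed.

```python
import math

def _entropy(data: bytes) -> float:
    """Shannon entropy in bits per byte (0.0 to 8.0)."""
    counts = [0] * 256
    for b in data:
        counts[b] += 1
    n = len(data) or 1
    return -sum(c / n * math.log2(c / n) for c in counts if c)

def parse(data: bytes) -> dict:
    """Stage 1: extract measurable artifacts ('features') from the raw file."""
    printable = sum(32 <= b < 127 for b in data)
    return {
        "size": len(data),
        "printable_ratio": printable / max(len(data), 1),
        "entropy": _entropy(data),
    }

def classify(features: dict) -> str:
    """Stage 2: a stand-in for the trained model's decision function."""
    # Packed or encrypted payloads tend toward high entropy. A single
    # dominant feature like this is exactly what an attacker can target
    # once they learn the model depends on it.
    return "malicious" if features["entropy"] > 7.0 else "benign"

# A high-entropy blob trips the rule; plain repetitive text does not.
print(classify(parse(bytes(range(256)) * 64)))  # malicious
print(classify(parse(b"hello world" * 100)))    # benign
```

The vulnerability Cylance describes lives at the seam between these stages: manipulating what stage 1 extracts changes what stage 2 sees.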

"Machine learning remains the most effective tool in combating malware, which is why the technique has been nearly universally adopted by security vendors."
—Cylance blog post

2. AI and ML can shut down some attacks, but can also open new paths

AI and ML products depend on models that can become new targets for adversaries. "If you could truly understand how a certain model works, and the type of features it uses to reach a decision, you would have the potential to fool it consistently, creating a universal bypass," the Skylight researchers wrote.

Sohrob Kazerounian, a senior data scientist at Vectra, a provider of automated threat management solutions, said that AI-based defenses inevitably push attackers toward new attack vectors. "As defensive capabilities are improved through the use of machine learning, attackers will have to respond by finding novel attacks that the newly ML-enhanced systems can't yet detect," he said.

While it is tempting to imagine a future in which super-intelligent AIs create defensive systems so secure that even the most advanced attackers are completely thwarted, he said, "the obvious result would be for hackers to respond with their own self-improving AI systems that learn to evade those defenses."

ML-based antivirus products are continuously trained, with hundreds of thousands of samples collected from the Internet every day, said Raffael Marty, vice president of research and intelligence at Forcepoint, a cybersecurity and behavioral analytics company.

"If an attacker can infiltrate that supply chain, that's going to be very dangerous because it can skew all the algorithms learning from those samples."
—Raffael Marty

Skylight's research should serve as a reminder to security teams that cybercriminals have the capability and desire to evade next-generation antivirus tools, said Kevin Bocek, vice president of security strategy and threat intelligence at Venafi, a maker of software to secure and protect cryptographic keys and digital certificates.

"We should all expect to see similar vulnerabilities in the future."
—Kevin Bocek

3. Trust issues shouldn't be left to machines alone

The Cylance example exposes the limitations of leaving machines to make decisions on what can and cannot be trusted, said Gregory Webb, CEO of Bromium, an endpoint security company.

"If we place too much trust in such a system's ability to know what is good and bad, we will expose ourselves to untold risk which, if left unattended, could create huge security blind spots, as was the case here."
—Gregory Webb

He said security needs to move away from prediction and detection models and toward incorporating application isolation within a layered defense scheme. Then, even if the malware executes, it will be cut off from doing harm. 

"This allows teams to move away from worrying about whether code is good or bad, while ensuring that company assets are secure," Webb said.

4. AI and ML are no substitute for in-depth strategies

The idea that AI and ML alone can stop all threats is a dangerous misperception, said Vectra's Kazerounian.

"We should disabuse ourselves of the notion that a silver bullet even exists in the realm of cybersecurity. The only reasonable approach to understanding the status of AI in security is to recognize it for what it is: a tool in an arsenal from which cybersecurity professionals can draw."
—Sohrob Kazerounian

Multiple layers of defense are needed, even multiple layers of ML, said Fernando Montenegro, a senior analyst with 451 Research, a research and advisory company based in Boston.

"If this attack shows us anything, it's that if you're using a single model to tell you something, then you may have a problem," he said.
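The single-model problem can be sketched as a simple layered verdict. The three detectors below are hypothetical stand-ins (for, say, a signature engine, an ML file classifier, and a behavioral monitor); the point is only that a majority vote keeps one fooled model from deciding alone.

```python
# Sketch of layering detectors so that evading one model is not enough.
# All thresholds and fields here are invented for illustration.

def signature_check(sample: dict) -> bool:
    return sample["hash"] in {"deadbeef", "c0ffee"}  # known-bad hashes

def ml_classifier(sample: dict) -> bool:
    return sample["ml_score"] > 0.5                  # model verdict

def behavior_monitor(sample: dict) -> bool:
    return sample["spawned_processes"] > 10          # runtime heuristic

def is_malicious(sample: dict) -> bool:
    """Flag only when a majority of independent layers agree."""
    votes = [signature_check(sample), ml_classifier(sample),
             behavior_monitor(sample)]
    return sum(votes) >= 2

# An evaded ML model (score pushed below 0.5) no longer decides alone:
evaded = {"hash": "deadbeef", "ml_score": 0.1, "spawned_processes": 40}
print(is_malicious(evaded))  # True: signature + behavior outvote the model
```

The value of the ensemble depends on the layers failing independently; an attack that fools all three at once defeats the vote, which is why the layers should rely on different kinds of evidence.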

"If you think AI is the answer to your security problems, then you don't really understand the totality of your security problem. AI and machine learning are effective tools, but they're just a piece of the solution."
—Fernando Montenegro

ML has been an overwhelmingly positive development for information security, said Hyrum Anderson, chief scientist at Endgame, an endpoint security company. But traditional signatures still detect known and well-behaved malware families at rates approaching 100%, with false-positive rates approaching zero.

"Often neglected is the fact that old-school signatures actually perform better at the narrow role for which they were designed. So signatures have a place. And machine learning has a place. They should work together."
—Hyrum Anderson

Don't believe the hype

There's no getting away from overuse of buzzwords, said Shahrokh Shahidzadeh, CEO of Acceptto, a provider of continuous biobehavioral authentication.

"We have seen this with digital transformation, blockchain, zero trust, and others. We need to vet the claims of how and why and, most importantly, test before and after impacts."
—Shahrokh Shahidzadeh

AI is absolutely overhyped, said 451 Research's Montenegro. "We are placing too much faith in it without understanding how it works and what it can and can't do," he said.

Forcepoint's Marty added that for applications such as speech and image recognition, ML is perfect. But as soon as you get into control systems or detecting things such as security breaches, it's not necessarily the right approach, he said. "You have to use a combination of methods."

"With these algorithms, we feel like we can explain the world with data, but that's not how the world works. Not everything is explainable by data."
—Raffael Marty
