Researchers Trick Cylance Antivirus Into Thinking Malware Is Trusted Software
by Nicole Lindsey

It looks like artificial intelligence (AI) is not going to be the silver bullet for the malware problem plaguing the Internet after all. In a first-of-its-kind test, a group of Australian security researchers at Skylight Cyber proved conclusively that they could fool AI-based antivirus defenses into treating a particularly malignant piece of malware as trusted software. Even against some of the most virulent malware and ransomware on the planet – such as WannaCry and SamSam – the AI-based BlackBerry Cylance antivirus program was completely helpless.

How the Australian security researchers defeated the Cylance antivirus program

Even worse, it didn’t take much human effort to overpower the much-hyped AI Cylance antivirus software. Once the Australian information security researchers had successfully reverse engineered the Cylance antivirus software, bypassing it was really just a matter of appending a few simple strings to the end of the malicious files, after which they could slide by undetected. As the security researchers discovered, the AI-based Cylance engine had been trained with a systematic bias for and against certain pieces of computer code. Once that bias was identified, creating a universal bypass was a simple matter. As it turns out, the Cylance AI antivirus program was augmented with a series of whitelists and blacklists, and had developed a persistent bias in favor of certain strings from a popular gaming application. As a result, any malware that contained these strings was essentially given a free pass by the Cylance antivirus program.

According to Gregory Webb, CEO of Bromium, companies must do more to address the false notion that AI or any other technology is 100 percent effective: “The breaking news on Cylance really draws into question the whole concept of categorizing code as ‘good or bad’, as researchers were able to just rebadge malware as trusted – they didn’t even have to change the code. This exposes the limitations of leaving machines to make decisions on what can and cannot be trusted. Ultimately, AI is not a silver bullet, it’s just the latest craze in doing the impossible – i.e. predicting the future. While AI can undoubtedly provide valuable insights and forecasts, it is not going to be right every time and will always be fallible; ultimately predictions are just that, predictions, they are not fact. As this story shows, if we place too much trust in such systems’ ability to know what is good and bad we will expose ourselves to untold risk – which if left unattended could create huge security blind spots, as is the case here.”

For security researchers who have been predicting that artificial intelligence (AI) and machine learning (ML) are the future of antivirus protection, the complete demolition of the Cylance antivirus program has to be a crushing blow. Cylance billed its product as one of the best and most innovative in the marketplace, thanks to its use of AI and machine learning to prevent threats. And Cylance had steadily built a reputation as one of the top endpoint security firms, based in large part on its CylancePROTECT endpoint offering.

A global bypass of AI antivirus protection

As the Australian security researchers pointed out, this was not a case of a single exception slipping by undetected. This was truly a “global bypass”: 100 percent of the Top 10 malware programs on the planet evaded detection, as did 83 percent of the Top 384. And they did so in spectacular fashion. Simply by appending a few snippets of code to the very end of the malware – perhaps the simplest, laziest and most naive approach possible – the Australian security researchers were able to transform scores as low as -999 into a near-perfect 996, or a similarly low score of -975 into a very impressive 984. In short, a few simple strings added to the existing code were enough to completely fool the Cylance antivirus.

There is a simple analogy for how the security researchers were able to defeat the Cylance antivirus program so conclusively every time. As the researchers themselves pointed out, training an AI antivirus program to distinguish “malware” from “trusted software” is similar to teaching an ML program to distinguish birds from humans. Show a computer enough examples of birds and humans, and it will eventually learn that birds have beaks and humans do not. From then on, any time the computer recognizes a beak, it will assume the object is a bird, not a human. But there is one problem: it becomes very easy for a clever human to fool the computer. Just put on a mask with a bird beak, and the computer will conclude that the person is really a bird. That is what happened with the Cylance antivirus – any time it saw a certain string of computer code, it assumed it was a trusted piece of code from a popular gaming application. So all the malware had to do was put on its equivalent of a beak mask, and it could slide by undetected.
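The beak-mask analogy can be made concrete with a toy scorer. The marker strings and weights below are entirely invented for illustration – Cylance's real engine is a trained neural network over thousands of features, not a lookup table – but the failure mode is the same: a handful of heavily weighted “trusted” features can outvote every malicious signal in the file.

```python
# Invented markers and weights, purely for illustration.
TRUSTED_MARKERS = {b"GameEngineInit": 600, b"RenderFrame": 600}
SUSPICIOUS_MARKERS = {b"EncryptAllFiles": -400, b"BitcoinRansomNote": -600}

def score(blob: bytes) -> int:
    """Toy verdict: positive = trusted, negative = malicious.

    Mimics a score range of roughly -1000 (worst) to 1000 (best).
    """
    total = 0
    for marker, weight in {**TRUSTED_MARKERS, **SUSPICIOUS_MARKERS}.items():
        if marker in blob:
            total += weight
    return max(-1000, min(1000, total))

malware = b"...EncryptAllFiles...BitcoinRansomNote..."
disguised = malware + b"GameEngineInit RenderFrame"

print(score(malware))    # -1000: flagged as malicious
print(score(disguised))  # 200: now rated trustworthy
```

Note that the malicious markers are still present in the disguised file – the appended “mask” strings simply outweigh them, which is exactly the bias the researchers exploited.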

The future of AI antivirus programs

Obviously, AI antivirus vendors have a lot of egg on their faces right now. Over the past 18 months, there has been relentless hype about how good AI-based antivirus programs are at stopping real-time threats. Some vendors (including Cylance) have even claimed that their programs could recognize and block new malware before hackers had created it. So it is pretty embarrassing that the AI antivirus program could not even detect a malignant piece of malware like WannaCry, perhaps the most famous ransomware on the planet.

So what’s next? In the short term, Cylance has promised a “hot fix” that can be applied immediately to its antivirus software worldwide, along with a retraining of its machine learning engine to remove systematic biases. The big question, though, is what will happen in the long term. This is not the end of AI-based antivirus programs, but it could mean a return to a “hybrid” approach, in which ML and AI models do the heavy lifting of detecting new and unfamiliar malware, while traditional antivirus programs handle blocking known malware in the wild (such as WannaCry or SamSam). Security researchers with the relevant experience would then analyze traffic to catch the exceptions.

The defeat of the Cylance antivirus program could also cause BlackBerry to re-think its new strategy focused on Internet of Things (IoT) security. The thinking behind BlackBerry’s acquisition of Cylance was that AI would be the key to unlocking IoT security solutions. Now it looks like AI-based antivirus programs might actually introduce an entirely new attack surface. One thing is certain: in the ongoing cat-and-mouse game between hackers and security researchers, the upper hand remains with the attacker.