We often think about artificial intelligence (AI) in terms of the benefits it can provide by helping us complete tasks more efficiently. It’s important to remember, though, that this technology can be used just as easily for malicious ends. Today, both cybersecurity experts and cybercriminals are using AI. Could it pose more of a cybersecurity threat than we think?
Because AI can learn on its own and use that knowledge to complete tasks autonomously, it can help us complete work more efficiently, more cost-effectively, more accurately and with less hands-on effort. Those benefits apply to virtually every sector. They also apply to cyber attacks and other security threats.
Cybercriminals can use AI to automate aspects of their attacks, enabling them to launch attacks more quickly, at a greater scale and a lower cost. They may also be able to pinpoint their targets more precisely.
In a recent report, experts from academia, industry and the non-profit sector outlined ways in which AI could be used maliciously. One scenario described in the report involves the automation of social engineering attacks. An AI-powered system could collect information about a victim that’s available online.
Bad actors could even use a chatbot or similar application to interact with a target and learn their preferences, interests and typical behaviors, much as an AI-based virtual assistant like Siri or Cortana learns about its user. The AI program could use this information to craft links the target is likely to click based on their past internet activity. The system could even send the link from an email address that impersonates one of the target's real contacts and mimics that contact's writing style.
Hackers can also use AI to automatically uncover vulnerabilities in code and even create the code needed to exploit them. AI could also help hackers respond to their victims’ behavior to avoid detection.
AI technology could also let cybercriminals generate large volumes of web traffic through autonomous agents that imitate human behavior on a website to evade detection. Cybercriminals could overwhelm a server in this way as part of a denial-of-service attack.
Additionally, bad actors could use AI to help choose their victims. An algorithm could analyze potential victims and choose the best targets based on how much money they have, their likelihood to click on malicious links and the likelihood that they have significant security vulnerabilities in their systems.
Researchers at IBM have developed a program called DeepLocker that demonstrates how AI and malware can combine to create potent cybersecurity threats. DeepLocker uses AI to conceal malicious intent inside benign-looking applications and trigger an attack only once it identifies a specific target. It can identify that target through attributes of the victim's computer system, geolocation and even voice or facial recognition. This shows how AI could make cyber attacks far more precise.
Artificial intelligence can also be a significant boon to cyber defenders. It can help defend against attacks, including those powered by AI. After training on what normal activity looks like, AI-powered programs can help to identify unusual activity that may be a threat, whether that threat is in the form of malicious code, a phishing email or another attack. AI could also identify vulnerabilities in code in the same way cybercriminals could.
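The idea of learning what normal activity looks like and flagging deviations can be sketched very simply. The following toy example uses a statistical baseline rather than a trained model, and the traffic figures and three-standard-deviation threshold are invented for illustration; real AI-powered defenses learn from far richer data.

```python
# Toy illustration of anomaly detection: establish a baseline from
# "normal" activity, then flag observations that deviate sharply.
# The data and threshold here are hypothetical.
from statistics import mean, stdev

# Hypothetical baseline: daily counts of outbound connections per host.
normal_activity = [102, 98, 110, 105, 99, 101, 108, 97, 103, 100]

baseline_mean = mean(normal_activity)
baseline_stdev = stdev(normal_activity)

def is_anomalous(observation, threshold=3.0):
    """Flag activity more than `threshold` standard deviations from baseline."""
    return abs(observation - baseline_mean) > threshold * baseline_stdev

print(is_anomalous(104))   # typical traffic
print(is_anomalous(450))   # possible exfiltration or attack traffic
```

A production system would replace the single statistic with a model trained on many features at once, but the core loop is the same: learn "normal," then alert on what doesn't fit.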
AI can also power biometrics, which can make authentication more secure. Beyond biometric attributes like facial, voice and fingerprint recognition, it can power behavioral biometrics, which identifies people based on how they interact with a device. Modern behavioral biometric systems can track as many as 2,000 parameters on a mobile device.
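One of those behavioral parameters is typing rhythm. The sketch below compares a user's inter-keystroke intervals against a stored profile; the profile values and matching threshold are made up for illustration, and real systems combine hundreds of such signals rather than one.

```python
# Toy sketch of behavioral biometrics: compare a user's typing rhythm
# (inter-keystroke intervals in milliseconds) against a stored profile.
# Profile, samples and threshold are hypothetical.

def rhythm_distance(profile, sample):
    """Mean absolute difference between corresponding keystroke intervals."""
    return sum(abs(p - s) for p, s in zip(profile, sample)) / len(profile)

def matches_profile(profile, sample, threshold=25.0):
    """Accept the sample if its rhythm is close enough to the enrolled profile."""
    return rhythm_distance(profile, sample) < threshold

stored_profile  = [120, 95, 140, 110, 130]   # enrolled typing pattern
genuine_sample  = [118, 99, 135, 112, 128]   # same person, slight variation
impostor_sample = [200, 60, 210, 70, 190]    # very different cadence

print(matches_profile(stored_profile, genuine_sample))   # expected: True
print(matches_profile(stored_profile, impostor_sample))  # expected: False
```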
Risks associated with AI-based cybersecurity
Some experts warn, though, that excessive reliance on AI for cybersecurity could cause problems. AI hype could cause security companies to launch AI-powered solutions before they’re ready, and companies could rely on them too heavily.
Cybercriminals could also target AI-based security systems themselves. They could feed an AI system false data so that it misses threats. And if they can work out which features an AI program uses to identify malware, they could strip those features from their malicious code to evade detection.
Recommendations for the future
AI technology is still under development, so we haven’t seen anywhere near the full extent of its applications for both cybersecurity and cyber attacks. To better understand these possibilities, we need more research in this area. Those researching and developing AI should consider the potential associated security risks and allow them to guide their work.
Companies using AI for cybersecurity should also use multiple algorithms rather than a single master algorithm. This makes it much harder for bad actors to get around AI-based security programs: if one algorithm is disabled, the others can still function and may even detect that it was compromised.
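The multiple-algorithm idea can be sketched as a majority vote across independent detectors, so defeating any single detector is not enough to slip past the system. The three detectors below are hypothetical stand-ins for real trained models.

```python
# Minimal sketch of combining multiple detection algorithms by majority
# vote. Each detector is a deliberately simple, hypothetical check.

def signature_detector(sample):
    # Pretend check: known-bad marker appears in the payload.
    return "evil_marker" in sample

def heuristic_detector(sample):
    # Pretend check: suspiciously long payload.
    return len(sample) > 40

def entropy_detector(sample):
    # Pretend check: high ratio of non-alphabetic characters,
    # a crude proxy for packed or encrypted content.
    non_alpha = sum(1 for c in sample if not c.isalpha())
    return non_alpha / max(len(sample), 1) > 0.5

DETECTORS = [signature_detector, heuristic_detector, entropy_detector]

def is_malicious(sample):
    """Majority vote: flag the sample only if most detectors agree."""
    votes = sum(detector(sample) for detector in DETECTORS)
    return votes > len(DETECTORS) // 2

# Even if an attacker strips the known signature from their code,
# the remaining detectors can still outvote the evaded one.
print(is_malicious("hello"))                     # benign input
print(is_malicious("x" * 10 + "!@#$%" * 10))     # flagged by two detectors
```

The design point is independence: the detectors should rely on different features, so that evading one (for example, removing a signature) does not automatically evade the others.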
AI is neither an inherently beneficial nor malicious technology — it can have either effect depending on how it’s used. It’s going to play a growing role in cybersecurity in the coming years whether we like it or not, so it’s important that we do everything we can to better understand how it will impact cyber defense and take steps to protect ourselves from potential threats.