Artificial intelligence has, in recent years, developed rapidly, serving as the basis for numerous mainstream applications. From digital assistants to healthcare and from manufacturing to education, AI is widely considered a powerhouse that has yet to unleash its full potential. But in the face of rising cybercrime rates, one question seems especially pertinent: is AI a solution for cybersecurity, or just another threat?
How AI could improve cybersecurity
Over the past few years, cybersecurity has emerged as a critical concern for businesses across a wide range of industries, as more and more companies build out a strong online presence. At the core of cybercrime trends is data. Broadly considered the new currency of an increasingly digital world, data is one of the most important assets for all types of organizations, and safeguarding it is a top priority. In their efforts to keep hackers at bay, cybersecurity experts have developed sophisticated data protection techniques, such as data pseudonymization and data encryption. Data pseudonymization is a security process in which critical data is replaced with fictitious but realistic-looking information. It is widely used by companies that wish to maintain the referential integrity and statistical accuracy of sensitive data and so minimize disruption to their operations. Data encryption is another popular technique: it renders data unintelligible to anyone who does not hold the encryption key, thereby protecting it from intruders.
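To make the pseudonymization idea concrete, here is a minimal sketch in Python. Everything in it is illustrative: the key, the pool of replacement names, and the data are hypothetical, and a real system would use a far larger pool and a properly managed secret. The key property it demonstrates is referential integrity: the same real identifier always maps to the same pseudonym.

```python
import hmac
import hashlib

# Hypothetical secret key -- in practice this would come from a key vault,
# never be hard-coded in source.
SECRET_KEY = b"example-pseudonymization-key"

# A small pool of realistic-looking replacement names (illustrative only;
# with a pool this small, different inputs can collide on the same name).
FAKE_NAMES = ["Alex Morgan", "Jamie Lee", "Sam Carter", "Riley Quinn"]

def pseudonymize(value: str) -> str:
    """Map a real identifier to a fictitious but realistic-looking one.

    The mapping is deterministic (an HMAC of the value), so the same input
    always yields the same pseudonym -- preserving referential integrity
    across records without exposing the original data.
    """
    digest = hmac.new(SECRET_KEY, value.encode(), hashlib.sha256).digest()
    index = int.from_bytes(digest[:4], "big") % len(FAKE_NAMES)
    return FAKE_NAMES[index]

# The same customer always receives the same pseudonym:
assert pseudonymize("Jane Smith") == pseudonymize("Jane Smith")
```

Because the pseudonym is derived from a keyed hash rather than a random draw, joins and aggregate statistics over the pseudonymized records still line up, which is exactly why companies use the technique to keep test and analytics environments working on protected data.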
Recently, artificial intelligence has entered the game. Researchers and cybersecurity experts are harnessing its potential to create solutions that can identify and prevent hacker attacks with minimal human input. Machine learning and neural networks have enabled developers to adapt to new attack vectors and better anticipate cybercriminals' next steps. The expected impact of these applications is so great that 25% of IT leaders consider security the top reason for adopting machine learning within their organizations. As a motivation, security is surpassed only by business analytics, picked by 33% of respondents, while 16% aim to use machine learning for sales and marketing and a further 10% for customer service. AI is not only good for beefing up security; it is also good for business, as it can reduce the funds and time needed for manual, human-driven detection and intervention by automating the inspection process. It is also believed to be more accurate than humans, responding better to stealthy attacks and insider threats.
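The detection principle behind such tools can be sketched in miniature. Production systems train models over many signals, but even a simple statistical baseline conveys the core idea of automated anomaly detection: learn what "normal" looks like, then flag observations that deviate too far from it. The numbers below are invented for illustration.

```python
import statistics

# Hypothetical hourly counts of failed logins observed during a quiet
# baseline window (invented data).
baseline = [3, 5, 4, 6, 5, 4, 3, 5, 6, 4]

def is_anomalous(observation: float, history: list, threshold: float = 3.0) -> bool:
    """Flag an observation whose z-score against the baseline exceeds threshold.

    This stands in for what an ML-based detector does at scale: model normal
    behavior, then surface outliers for automated response or human review.
    """
    mean = statistics.mean(history)
    stdev = statistics.stdev(history)
    z = abs(observation - mean) / stdev
    return z > threshold

# An ordinary hour passes quietly; a sudden spike is flagged for review.
assert not is_anomalous(5, baseline)
assert is_anomalous(60, baseline)
```

Real machine-learning detectors replace the single z-score with models trained on many features (source addresses, timing, payload characteristics), which is what lets them keep pace with shifting attack vectors rather than relying on fixed signatures.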
Is AI a threat when it falls into the wrong hands?
Yet AI can also become a real headache for cybersecurity professionals around the globe. Just as security firms can use the tech to spot attacks, so can hackers in order to launch more sophisticated attack campaigns. Spear phishing is just one example out of many, as machine learning can allow cybercriminals to craft more convincing messages intended to dupe the victim into giving the attacker access to sensitive information or installing malicious software. AI can even help match the style and content of a spear phishing campaign to its targets, as well as expand the volume and reach of the attacks exponentially. Meanwhile, ransomware attacks are still a hot topic, especially after the WannaCry incident, which reportedly cost the UK's National Health Service (NHS) a whopping £92 million – £20 million during the attack, between May 12 and 19, 2017, and a further £72 million to clean and upgrade its IT networks – and forced the cancellation of 19,000 healthcare appointments.
Against this backdrop, AI could be used to build new, more effective malware able to learn and adapt in order to launch further attacks. Such a development could be devastating for cybersecurity defenses, as traditional protection tools like sandboxes could easily be fooled by polymorphic, AI-powered malware. This could become especially important in the context of cyber warfare, particularly in light of recent allegations of attacks on energy infrastructure by foreign powers. A recent joint report by the FBI and the US Department of Homeland Security highlighted concerns that hackers associated with Russia were behind a series of attempts to infiltrate and damage critical infrastructure, including the energy and nuclear sectors, the aviation industry, and water facilities. The hackers mostly employed attack vectors such as spear phishing emails, credential harvesting, and watering hole domains. AI could further amplify such attacks and usher in a new era of state-sponsored attacks and cyber espionage.
Last but not least, AI could prove a threat to cybersecurity in a more subtle way. As more and more companies adopt AI-driven and machine learning products as part of their defense strategy, researchers worry that this could lull employees and IT professionals into a false sense of security. Yet lowering our guard in the face of rising cybercrime trends could be a fatal mistake. AI solutions are not 100% foolproof – no cybersecurity solution alone ever is – so building a comprehensive, multi-faceted strategy should remain a priority for businesses. It is also important that developers allocate enough time to thoroughly label training data on potential threats and to harness AI's capacity to keep learning without supervision.
More often than not, new tech developments like AI and machine learning are a double-edged sword. Whether they will prove beneficial in the long run rests on our ability to harness their potential properly.