Artificial Intelligence (AI) in cybersecurity is an arms race on both sides of the battlefield.
Defending organizations utilize AI-powered email security measures to enhance network protection, detect advanced malware and ransomware, optimize critical data center processes, improve threat response times, and reduce human error. By automating tasks and driving efficiencies, AI can also alleviate the workloads of overwhelmed and understaffed security teams.
Unfortunately, threat actors have also identified the benefits of AI technology. Cybercriminals leverage AI to optimize brute force attacks, generate malicious deepfakes, propagate advanced phishing and malware campaigns, and increase the overall volume and velocity of their attacks.
As factors such as hybrid work structures, widespread staffing shortages, and tool sprawl continue to add complexity and chaos to a rapidly evolving threat landscape, it is critical that organizations get a firm handle on the prevalence of these attacks and proactively adopt measures to mitigate them.
AI vs. AI attacks
AI models are leveraged for both good and evil across the evolving cyber threat landscape every day. For example, threat actors can poison defensive AI models with inaccurate training data. By introducing benign files that resemble malware, or by creating behavior patterns that repeatedly prove to be false positives, attackers can trick a defending model into lowering its guard and marking genuine attack behaviors as safe. Adversaries can also analyze and predict how their tactics, techniques, and procedures (TTPs) are detected by defending AI models, then use that insight to subtly modify indicators and behaviors to stay a step ahead.
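To make the poisoning idea concrete, here is a minimal sketch, assuming a toy nearest-centroid detector and entirely hypothetical feature data (file entropy and a count of suspicious API calls). An attacker floods the training set with benign-labelled samples engineered to sit in malware-like feature space, dragging the "benign" centroid toward the attack region so a malicious file slips past. This is an illustration of the concept, not a real detection pipeline.

```python
# Toy data-poisoning sketch (hypothetical data, stdlib only):
# benign-labelled samples that resemble malware shift a simple
# nearest-centroid detector's decision boundary.

def centroid(points):
    # Component-wise mean of a list of feature vectors.
    return [sum(c) / len(points) for c in zip(*points)]

def dist(a, b):
    # Euclidean distance between two feature vectors.
    return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

def train(samples):
    # samples: list of (features, label); returns one centroid per label.
    by_label = {}
    for feats, label in samples:
        by_label.setdefault(label, []).append(feats)
    return {label: centroid(pts) for label, pts in by_label.items()}

def classify(model, feats):
    # Predict the label whose centroid is closest.
    return min(model, key=lambda label: dist(model[label], feats))

# Hypothetical clean training set: [file entropy, suspicious API calls]
clean = [([2.0, 1.0], "benign"), ([2.5, 0.0], "benign"),
         ([7.5, 9.0], "malware"), ([8.0, 8.0], "malware")]

# Poison: harmless files deliberately crafted to look malware-like.
poison = [([6.5, 7.5], "benign")] * 10

suspicious_file = [6.5, 7.5]
print(classify(train(clean), suspicious_file))           # -> malware
print(classify(train(clean + poison), suspicious_file))  # -> benign
```

Trained on clean data, the detector flags the suspicious file; after the poisoned samples are absorbed, the same file is classified as benign, which is exactly the "marking attack behaviors as safe" failure described above.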
The explosion of natural language processing (NLP) technology in recent years plays a large role on both sides of the AI battlefield as well. Threat actors can deploy AI to glean information that informs an attack: who the target employee's boss is, who they might feel pressured to complete a task for, a superior's writing style, common topics discussed in emails, and so on. Armed with this information, AI systems then use NLP to craft a realistic, well-written email designed to trick the employee. This technology allows cybercriminals to cast a wider net while still maintaining a level of personalization.
On the other end of the spectrum, however, many organizations are armed with the same NLP technology to detect anomalies in grammar usage, writing style, or communication patterns within inbound emails, which enables them to block messages and alert employees accordingly.
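The defensive side of that NLP matchup can be sketched very simply. The example below is a minimal stylometry baseline, assuming hypothetical emails and an arbitrary z-score threshold: it compares a new message's writing-style features (mean word length, mean sentence length) against a sender's historical baseline and flags large deviations. Production email security products use far richer models; this only illustrates the anomaly-detection idea.

```python
# Minimal writing-style anomaly check (hypothetical data, stdlib only):
# flag an inbound email whose style deviates sharply from the sender's
# historical baseline.
import statistics

def style_features(text):
    words = text.split()
    sentences = [s for s in text.replace("!", ".").replace("?", ".").split(".")
                 if s.strip()]
    return (
        sum(len(w) for w in words) / len(words),  # mean word length
        len(words) / len(sentences),              # mean sentence length
    )

def is_anomalous(history, new_message, threshold=3.0):
    """Flag the message if any style feature lies more than `threshold`
    standard deviations from the sender's historical mean (z-score test)."""
    baseline = [style_features(m) for m in history]
    new = style_features(new_message)
    for i, value in enumerate(new):
        column = [f[i] for f in baseline]
        mean, stdev = statistics.mean(column), statistics.pstdev(column)
        if stdev and abs(value - mean) / stdev > threshold:
            return True
    return False

# Hypothetical sender history: short, terse updates.
history = [
    "Hi team. See notes attached. Thanks.",
    "Quick update. Demo moved to Friday. Cheers.",
    "Lunch at noon. Same place. See you there.",
]
phishing = ("I need you to urgently process a confidential wire transfer "
            "for an acquisition we are finalizing today and it must remain "
            "strictly between us until the announcement.")
routine = "Hi all. Minutes attached. Thanks again."

print(is_anomalous(history, phishing))  # True: one long, formal sentence
print(is_anomalous(history, routine))   # False: matches the baseline
```

The long, pressured wire-transfer request stands far outside the sender's usual terse style and gets flagged, while a routine note passes, mirroring how NLP-based defenses surface out-of-character messages for review.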
With rival systems going head-to-head with similar AI technology, the question simply becomes: which system is better at its job?
Effort and funding will separate winners from losers in the AI battle
Behind every AI system is a human attempting to achieve a goal. For defenders, that goal is to safeguard their organizations’ critical data assets and avoid the repercussions associated with a successful cyberattack. For threat actors, the goal is typically financial gain or sabotage.
These motivations are important when we consider the amount of time, effort, and money that goes into building an advanced AI system. It is no small task to create, configure, and maintain a custom AI model—never mind a complex, human-like system that can automate thousands of attacks simultaneously. The lower the perceived reward, the less time and effort a cyber criminal will put into building a sophisticated AI system. On the dark web, many cybercrime services sell for fewer than $500—and that amount is often only paid out if the attack succeeds.
Defenders, on the other hand, should be highly motivated to protect their critical data assets; according to the 2022 IBM Cost of a Data Breach report, the average data breach costs organizations $4.35 million (and $4.99 million for organizations where remote work is a factor).
3 ways organizations can defend against AI-enabled attacks
Organizations are in control of their own destinies when it comes to the effort and funding they’re willing to funnel toward strengthening cybersecurity posture. And considering both are key factors in separating the winners from losers, organizations should do everything in their power to improve their defense efforts. Below are three actionable ways to fortify your security posture and defend against AI-powered attacks:
Foster executive buy-in. Effective cyber defense programs start with leaders in the C-suite who are committed to protecting their organization. A World Economic Forum survey of global cyber leaders found that while 84% of respondents felt cyber resilience was a business priority, a much lower percentage (68%) saw it as a major aspect of overall risk management. As a result of this misalignment, many security leaders said they are not consulted in business decisions. When executives don’t prioritize cybersecurity at the organizational level, fewer resources and dollars get allocated to defense measures, resulting in more unchecked vulnerabilities for attackers to exploit. Therefore, it is imperative that the C-suite fully understands why cyber risk is business risk so they can implement proactive measures for mitigation and governance. To foster cybersecurity buy-in from the top down, the executive board must approve a sufficient budget to meet cyber priorities, integrate security experts into daily decision-making, and align cybersecurity with organizational business objectives.
Invest in the right technology. “Just enough” is never enough when it comes to cybersecurity, and in today’s rapidly evolving and complex threat landscape, a bare bones approach to cyber defense simply won’t cut it. With security established as an organizational priority, businesses should invest in best-in-class, cloud-based tools and technologies that effectively secure an organization’s most vulnerable attack vector: the intersection of business communications, people, and data.
Implement security awareness training. The human firewall often gets a bad reputation for being the weakest link in a cybersecurity program. But as the last line of defense, humans are also a critical asset in stopping malicious cyberattacks. Humans have a natural intuition that cybercriminals desperately seek to replicate in their NLP and AI systems, but it’s important to complement that human gut instinct with targeted cyber awareness training and an organizational culture that promotes cybersecurity as a team sport. Data shows that employees who complete awareness training are five times more likely to identify and avoid malicious links. As attacks become increasingly clever and targeted, it will be critical that businesses empower all employees to be vigilant cyber stewards.
Advanced technologies like AI will continue to be a growing driver behind both cyber defense and cybercrime moving forward. As adversaries jostle for the upper hand, it will be critical for organizations to keep a finger on the pulse of evolving TTPs and invest in the people, processes, and AI-powered tools that enable them to work protected.