[Image: padlock and chain near credit cards on a fishing hook, illustrating a phishing attack]

Phishing Attacks Are Targeting People’s Emotions; It’s Time to Leverage AI to Help

Phishing attacks have always targeted people’s emotions. COVID has drastically amplified those emotions, and hackers have not missed the opportunity. During the pandemic, thousands of attacks are taking place every day, preying on people’s fears and uncertainty regarding the virus, their jobs and their future. COVID-19-themed phishing attacks now account for 30 percent of all phishing websites. Meanwhile, scammers increasingly pose as HR employees “informing” employees that they have been laid off, and others masquerade as banks offering special deals during the economic downturn, all the while steering the user to embedded malware or links to fake websites.

Up to 95 percent of cybersecurity breaches are the result of human error, according to IBM, and in times of crisis that number could soar as stress short-circuits our common sense. Yet our main cyberdefense resources – employee training and antivirus software – aren’t sufficient. Most phishing attacks don’t involve viruses or malware at all, and today’s email gateway security can’t stop hackers’ sophisticated trickery from reaching users.

Now more than ever, cybersecurity needs to bridge the human emotion gap left by anti-virus software and human training. We are always susceptible to deception unless technology intervenes – which is why AI can be our best bet.

Emotions play a huge part in phishing, especially now

Humans are primarily driven by emotions, not logic. Our decision making is a direct result of how we feel, and emotions triggered by one event can impair our ability to make sound choices in another. Amid COVID-19, widespread reports of anxiety, fear, and mental health issues mean people are less emotionally equipped to recognize when they are being taken advantage of.

Phishing emails over the past few months contain words like ‘COVID’, ‘coronavirus’, ‘masks’, ‘test’, ‘quarantine,’ and ‘vaccine’ to play on people’s concerns and socially engineer their reactions. Cybercriminals have falsely alerted employees that someone on their team has tested positive for the virus and that they must read attached instructions to stay safe – when the “instructions” are actually a malicious attachment. People are more likely to open the document because the content is rooted in a potential reality, overriding their ability to assess the email for what it is.

Human error is increasing

Over the past two years, the number of security breaches caused by people within an organization rose 47 percent. Of those insider incidents, 62 percent were caused by negligent employees, unintentionally costing companies $4.6 million per year in damages on average.

Those figures don’t account for increases in human error as a result of the lockdown and new work conditions. Mass layoffs mean heavier workloads, while remote work means more distractions at home, raising the likelihood of teams making mistakes. Not having an IT support desk within walking distance adds an extra obstacle to addressing tech concerns, so errors are left to worsen.

Education is not enough

Our go-to cybersecurity solution is investing in employee education and awareness, but at this stage it isn’t effective enough. Many employees are not in the right state of mind for such training: some security firms have chosen to remove COVID-19-themed phishing simulations – even though such lures account for roughly one in three phishing websites – to avoid further traumatizing teams. Instead, companies are advocating raising awareness, not panic.

We already know that phishing training isn’t as impactful as it’s perceived to be. In one phishing training study, participants received in-depth guidance and specific examples of phishing emails to avoid, yet three months later, they showed very little improvement in their susceptibility to scams. Ironically, one cybersecurity training firm recently found itself the victim of a phishing attack against an employee, in which 28,000 records were breached.

Although education can reduce how often employees click on harmful content, in large organizations, even a small failure rate poses a huge risk. Remember, a whole system can be compromised via a single point of entry.

We’re fooling ourselves into a false sense of security

Many organizations and individuals wrongly believe that having a firewall or a VPN eradicates the threat of a cyber attack. In reality, most phishing attacks don’t involve viruses or malware, so antivirus software doesn’t block them. On top of that, 58 percent of all phishing websites use SSL certificates, displaying the browser padlock that tricks users into thinking the data they type is secure.

Cybersecurity tools also don’t shield against lateral phishing (also known as ‘leapfrogging’) which is when cybercriminals use hacked accounts to breach other – often more senior – accounts in the network. In these instances, phishing emails can be unwittingly sent from legitimate users, and so safely pass through security checks.

Blindly trusting standard or corporate security tools only lowers users’ guard, as they wrongly assume there’s a safety net if they make a mistake.

Taking cybersecurity out of people’s hands

Human error and emotions will always hinder our ability to fully defend ourselves against cybercriminals. AI, however, avoids human fallibilities like work fatigue and emotional biases. In many tasks, it surpasses human performance entirely. Already, high-value target organizations, such as those in healthcare, are adopting AI to combat cyber threats.

AI can automatically detect, highlight, and block breaches caused by malware or phishing, so if a distracted employee makes a mistake, there are no negative consequences. AI has the advantage of “learning” from millions of real-world cyber incidents. It can be trained on a variety of data sources including code, anomalous employee behavior, unclassified URLs, unusual formatting, and knowledge of new malware. Even if hackers attempt to hide damaging code in large volumes of filler code, AI can spot and thwart an attack.
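
To make the idea of URL-based signals concrete, here is a minimal sketch of a feature-based URL scorer in pure Python. The features, keywords, and weights are illustrative assumptions, not drawn from any specific product; a real system would learn weights from millions of labeled examples rather than hand-tuning them.

```python
import re
from urllib.parse import urlparse

# Illustrative keyword list; real models learn such signals from data.
SUSPICIOUS_WORDS = ("login", "verify", "account", "covid", "vaccine", "secure")

def url_features(url: str) -> dict:
    parsed = urlparse(url)
    host = parsed.hostname or ""
    return {
        "length": len(url),                      # very long URLs often hide the real domain
        "hyphens": host.count("-"),              # e.g. paypal-secure-login.example
        "has_ip_host": bool(re.fullmatch(r"[\d.]+", host)),  # raw IP instead of a name
        "has_at": "@" in url,                    # '@' can mask the true destination
        "keyword_hits": sum(w in url.lower() for w in SUSPICIOUS_WORDS),
        "subdomains": max(host.count(".") - 1, 0),
    }

def phishing_score(url: str) -> float:
    """Weighted sum of features; higher means more suspicious. Weights are invented."""
    f = url_features(url)
    return (
        0.01 * f["length"]
        + 0.5 * f["hyphens"]
        + 2.0 * f["has_ip_host"]
        + 2.0 * f["has_at"]
        + 1.0 * f["keyword_hits"]
        + 0.5 * f["subdomains"]
    )

legit = phishing_score("https://www.example.com/news")
shady = phishing_score("http://192.168.0.9/covid-vaccine/verify-login")
```

Here the raw-IP host and the pile of lure keywords push the second URL’s score well above the first’s, which is exactly the kind of separation a trained classifier produces at scale.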

Because AI can connect to data from all device endpoints in a network, it can predict how and where systems are most likely to be breached, and can augment under-resourced security operations. Additionally, AI produces faster response times to malware, which is why 69 percent of companies acknowledge that they can’t respond to cyber threats without the use of artificial intelligence.

In terms of phishing, AI today can provide real-time detection of malicious websites that would completely bypass traditional security stacks. Phishing websites are increasingly deceptive, but AI can use computer vision to detect pages that would otherwise fool end users and stop them in their tracks.
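
One common building block for this kind of visual detection is perceptual hashing: reduce a page screenshot to a tiny fingerprint and compare it against fingerprints of known brand pages. The sketch below implements a difference hash over a small grayscale grid in pure Python; the pixel values are invented, and real systems operate on full screenshots alongside trained models.

```python
def dhash(pixels: list) -> int:
    """Difference hash: set a bit wherever a pixel is brighter than its
    right-hand neighbour. Visually similar images yield similar bit patterns."""
    bits = 0
    for row in pixels:
        for left, right in zip(row, row[1:]):
            bits = (bits << 1) | (left > right)
    return bits

def hamming(a: int, b: int) -> int:
    """Count differing bits; a small distance means near-identical images."""
    return bin(a ^ b).count("1")

# Invented 4x4 grayscale "screenshots": a known brand login page,
# a near-pixel-perfect phishing clone, and an unrelated page.
brand     = [[200, 180, 160, 140], [90, 110, 130, 150],
             [200, 180, 160, 140], [90, 110, 130, 150]]
clone     = [[201, 181, 159, 141], [91, 109, 131, 149],
             [200, 180, 160, 140], [90, 110, 130, 150]]
unrelated = [[10, 200, 10, 200], [200, 10, 200, 10],
             [10, 200, 10, 200], [200, 10, 200, 10]]

d_clone = hamming(dhash(brand), dhash(clone))      # clone matches the brand page
d_other = hamming(dhash(brand), dhash(unrelated))  # unrelated page is far away
```

A clone that looks identical to the user produces a near-zero distance to the real brand page even though its pixels differ slightly, which is why visual fingerprinting catches lookalike pages that URL filters miss.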

Elsewhere, AI-powered biometric logins are a sophisticated option for companies with large budgets. Biometric logins allow users to authenticate themselves via scans of their fingerprints, retinas, or palms.

Technology like AI can fill in the gaps left by human emotions and unreliability. Even the smartest employees, when working around the clock, are highly susceptible to phishing attacks. At a time when people’s emotions are amplified and workforces are largely separated from their organization’s IT teams, we need to apply new technologies to help them where they are weakest.

As longer working hours and amplified human emotions expand opportunities for cyber attacks, there is a pressing need for AI that can provide more complete cybersecurity protection.