
Generative AI Is Being Weaponized By – And Against – Cybercriminals

Entering the final quarter of 2023, I can say with confidence that one of the hottest topics of the past year, for cybersecurity and many other sectors, has been the rise of generative AI. For our part, cybersecurity practitioners have uncovered many ways the technology can benefit security teams in the unending effort to keep cybercriminals at bay.

But every silver lining has a touch of gray, and unfortunately that means threat actors have been swift to adopt generative AI as the newest tool in their arsenals for launching sophisticated attacks. In this article, we’ll explore several ways that generative AI can be weaponized by cybercriminals, and how it can also be used to stop them.

Taking social engineering to a new threat level

One way cybercriminals are employing AI is to create more manipulative and successful phishing campaigns. Phishing is a common technique for initial compromise: stealing user or administrator credentials to gain access to corporate email mailboxes or publicly accessible systems. Phishing attacks that use generative AI have already been identified in the wild. One reason this method works so well is that the AI can quickly gather open-source intelligence (OSINT) about a target organization and its employees via social media, corporate websites, and public databases.

The data may contain details about job roles, recent projects, or company goals, as well as personal and professional connections, and even personal interests. All of this data is then used for persona matching, which is key to convincing a target that the email (or malicious link) was sent from a trusted source.

Once an employee falls victim to the spear-phishing email, the cybercriminal gains a foothold within the organization. Normally, a threat actor searches the compromised email account for keywords to uncover ongoing financial transactions. Using AI, especially a large language model (LLM) like ChatGPT, this can be done faster and with greater precision, for example by filtering for upcoming payment dates or larger expected transaction amounts.
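To make that concrete, here’s a minimal sketch of the kind of keyword and amount filtering described above, useful on the defensive side for testing whether such activity would be caught. The mailbox format, keywords, and dollar threshold are illustrative assumptions, not details from any observed attack tooling.

```python
import re

# Illustrative keywords and threshold -- assumptions for this sketch,
# not taken from any real attack tooling.
FINANCE_KEYWORDS = re.compile(r"\b(invoice|wire transfer|payment|remittance)\b", re.I)
AMOUNT_PATTERN = re.compile(r"\$\s?([\d,]+(?:\.\d{2})?)")
LARGE_AMOUNT_THRESHOLD = 50_000  # flag only larger expected transactions

def flag_financial_messages(messages):
    """Return messages that mention finance keywords and a large dollar amount.

    `messages` is assumed to be a list of dicts with 'subject' and 'body' keys,
    e.g. parsed from a plain-text mailbox export.
    """
    flagged = []
    for msg in messages:
        text = f"{msg['subject']}\n{msg['body']}"
        if not FINANCE_KEYWORDS.search(text):
            continue
        amounts = [float(a.replace(",", "")) for a in AMOUNT_PATTERN.findall(text)]
        if any(a >= LARGE_AMOUNT_THRESHOLD for a in amounts):
            flagged.append(msg)
    return flagged

if __name__ == "__main__":
    sample = [
        {"subject": "Wire transfer for Q4 invoice", "body": "Please remit $125,000.00 by Friday."},
        {"subject": "Lunch?", "body": "Noon works for me."},
    ]
    for msg in flag_financial_messages(sample):
        print("Flagged:", msg["subject"])
```

A few dozen lines of scripting already covers what a manual search would take hours to do; an LLM simply removes even that small barrier to entry.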

Sadly, the email danger doesn’t stop at malicious inbound messages to unsuspecting people. Once a cybercriminal has access to a victim’s outbound email, an AI can be trained on the victim’s diction and cadence to create very convincing, yet fraudulent, emails. These emails can be used to further propagate malware, compromise additional users’ credentials, or change payment instructions for large financial transactions. One security method we recommend for confirming large monetary transactions is voice verification via phone call; however, training an AI to impersonate an executive’s voice could create a convincing replica, and a realistic deepfake voice adds a layer of authenticity that increases the threat actor’s chances of success.

Increasing the exfiltration and malware dangers

Stealing credentials is just the tip of the spear (phishing pun intended, of course) when it comes to the organizational risks generative AI introduces. Threat actors are also using AI to more efficiently create and send polymorphic malware: malicious software that can alter its behavior in response to security measures or change its identifiable features to avoid detection. Pair that with the AI-driven reconnaissance described above, and a threat actor could have a neatly automated process for creating customized malicious PDFs tied to an organizational goal, a company event, or even a specific person’s personal interests. That contextual relevance makes the dangerous code more plausible and ups the chances of a successful compromise. AI-developed malware that evades traditional signature-based detection will also make it much harder for security solutions and teams to identify and mitigate the threat.

Threat actors could also deploy AI tools to scan and map an organization’s cloud-hosted applications to pinpoint where sensitive data is stored, such as cloud storage repositories, email threads, or attachments. Using automation to understand the data landscape means they can target the most valuable information without manually searching. These algorithms can be trained to recognize patterns and keywords associated with sensitive data, such as financial information, intellectual property, or customer data.
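Notably, this is the same pattern-and-keyword recognition that data loss prevention (DLP) tooling performs for defenders. Here’s a minimal sketch of a regex-based classifier for a few common sensitive-data patterns; the patterns are simplified assumptions, and real classifiers add validation such as Luhn checks and contextual scoring.

```python
import re

# Simple illustrative patterns; production DLP adds validation such as
# Luhn checks for card numbers and contextual scoring.
PATTERNS = {
    "credit_card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "us_ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "api_key_hint": re.compile(r"\b(?:api[_-]?key|secret)\b", re.I),
}

def classify(text):
    """Return the sensitive-data categories whose patterns match `text`."""
    return [name for name, pattern in PATTERNS.items() if pattern.search(text)]

if __name__ == "__main__":
    doc = "Customer card 4111 1111 1111 1111 on file; SSN 123-45-6789."
    print(classify(doc))  # ['credit_card', 'us_ssn']
```

Run against an inventory of cloud storage and mailboxes, a classifier like this maps the data landscape quickly, which is precisely why both attackers and defenders automate it.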

Similarly, threat actors can train exfiltration tools against machine learning-based detection mechanisms to exploit weaknesses in the detection models. Having AI exfiltrate data by shaping network traffic (i.e., packets) to resemble normal business operations, during normal business hours, can reduce alerts on anomalous activity. And if OSINT reveals that an organization has few security or IT personnel, exfiltration can be timed for overnight hours, further reducing the likelihood of successful containment.

Automation strengthens security posture

To combat many of these threats, organizations can utilize security information and event management (SIEM) correlation. Here, automation greatly enhances the efficiency and effectiveness of security monitoring by automatically analyzing and correlating large volumes of security events in real time. This makes alerting more efficient by filtering out false positives and bringing actual malicious events to the top of the queue for an analyst’s review. Additionally, once an alert is confirmed as a true positive, analysts and engineers can create automated responses to triage and remediate the issue.
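As an illustration, here’s a minimal sketch of one classic correlation rule: repeated failed logins followed by a success from the same source. The event schema and threshold are assumptions for the example; in practice, this logic lives in the SIEM’s own rule engine.

```python
from collections import defaultdict

# Illustrative threshold -- tune to the environment.
FAILURE_THRESHOLD = 5

def correlate_brute_force(events):
    """Flag a source IP that logs FAILURE_THRESHOLD+ failures before a success.

    `events` is assumed to be time-ordered dicts like
    {"src_ip": "...", "user": "...", "outcome": "failure" | "success"}.
    """
    failures = defaultdict(int)
    alerts = []
    for event in events:
        ip = event["src_ip"]
        if event["outcome"] == "failure":
            failures[ip] += 1
        elif event["outcome"] == "success":
            if failures[ip] >= FAILURE_THRESHOLD:
                alerts.append(f"Possible brute force from {ip}: "
                              f"{failures[ip]} failures then success as {event['user']}")
            failures[ip] = 0
    return alerts

if __name__ == "__main__":
    stream = ([{"src_ip": "203.0.113.7", "user": "jdoe", "outcome": "failure"}] * 6
              + [{"src_ip": "203.0.113.7", "user": "jdoe", "outcome": "success"}])
    for alert in correlate_brute_force(stream):
        print(alert)
```

Correlating across events like this is what separates a single noisy log line from an actionable alert at the top of the queue.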

Another area where automation is useful is vulnerability management. Vulnerability scanning, assessment, and remediation processes can be automated to help organizations stay on top of their security posture. As the number of vulnerabilities discovered every day increases, and vulnerability disclosure programs become mandated, automation can empower SOC teams to identify, prioritize, and patch vulnerabilities faster, reducing the window of opportunity for threat actors.
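Here’s a small sketch of the prioritization step: ranking scanner findings by severity weighted by asset exposure so the riskiest items surface first. The finding format and weights are assumptions for illustration; mature programs also layer in exploitability data such as CISA’s KEV catalog or EPSS scores.

```python
# Illustrative asset-criticality weights -- an assumption for this sketch.
ASSET_WEIGHT = {"internet-facing": 2.0, "internal": 1.0, "lab": 0.5}

def prioritize(findings):
    """Sort scanner findings by CVSS score weighted by asset exposure.

    `findings` is assumed to be dicts like
    {"cve": "CVE-2023-XXXX", "cvss": 9.8, "asset": "web01", "exposure": "internet-facing"}.
    """
    def risk(finding):
        return finding["cvss"] * ASSET_WEIGHT.get(finding["exposure"], 1.0)
    return sorted(findings, key=risk, reverse=True)

if __name__ == "__main__":
    findings = [
        {"cve": "CVE-2023-0001", "cvss": 7.5, "asset": "web01", "exposure": "internet-facing"},
        {"cve": "CVE-2023-0002", "cvss": 9.8, "asset": "db02", "exposure": "internal"},
    ]
    for f in prioritize(findings):
        weighted = f["cvss"] * ASSET_WEIGHT[f["exposure"]]
        print(f"{f['cve']} on {f['asset']}: weighted risk {weighted:.1f}")
```

Note how the internet-facing 7.5 outranks the internal 9.8 here: raw CVSS alone is a poor patching queue, and weighting by exposure is one simple way to fix that.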

Finally, to improve threat intelligence and information sharing, automation can support the collection, analysis, and dissemination of threat intelligence, providing organizations with real-time insights into emerging threats. By leveraging automation, security teams can proactively detect and respond to potential attacks, enhancing their overall security posture. Bringing this full circle, integrating threat intel into the SIEM used for monitoring adds another layer for identifying and alerting on anomalous or malicious events.
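To close the loop in code, here’s a minimal sketch of matching indicators of compromise (IOCs) from a threat intel feed against SIEM events. The feed contents and event fields are assumptions; production integrations typically pull indicators via standards like STIX/TAXII and use the SIEM’s native lookup tables.

```python
# Illustrative IOC set -- in practice this is refreshed from a feed
# (e.g., via STIX/TAXII) rather than hard-coded.
IOC_IPS = {"198.51.100.23", "203.0.113.99"}
IOC_DOMAINS = {"malicious.example"}

def match_iocs(events):
    """Yield alerts for events whose network fields match known IOCs.

    `events` is assumed to be dicts with a 'host' key and optional
    'dst_ip' and 'domain' keys.
    """
    for event in events:
        if event.get("dst_ip") in IOC_IPS:
            yield f"IOC hit: {event['host']} contacted known-bad IP {event['dst_ip']}"
        if event.get("domain") in IOC_DOMAINS:
            yield f"IOC hit: {event['host']} resolved known-bad domain {event['domain']}"

if __name__ == "__main__":
    events = [
        {"host": "ws-14", "dst_ip": "198.51.100.23"},
        {"host": "ws-15", "domain": "benign.example"},
    ]
    for alert in match_iocs(events):
        print(alert)
```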