[Image: Virtual shield representing generative AI and cyber resilience]

Gen AI: A Shield for Improved Cyber Resilience

As threat actors increasingly adopt generative AI (GenAI), GenAI-powered cyber attacks have become a profound worry for organizations across all industries. With ChatGPT and other GenAI applications now widely available, research shows a 135% increase in ‘novel social engineering’ attacks in January and February of 2023 alone.

With this alarming jump, it’s evident that companies need to prioritize defending against yet another class of cyberattack. Luckily, organizations can also put GenAI on their side: by learning to leverage these tools for effective defense, companies can mitigate the threat of rapidly evolving AI-powered attacks.

Vetting AI models for security success

Before deploying GenAI as a defense tool, teams and leaders need to understand its strengths and weaknesses. Proper research and education ensure that security procedures match the right tool to the right task. An easy way to understand what a given AI tool offers is to review its AI model card (sometimes known as a “system card”), which documents the model’s capabilities, what it has and has not been tested for, and its known flaws and vulnerabilities.
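
This review can even be partially automated. Below is a minimal sketch, assuming the model is hosted on the Hugging Face Hub and using the huggingface_hub library; the repo id is a placeholder, and the checklist fields are illustrative of what a vetting process might look for, not an exhaustive standard.

```python
# A minimal sketch of programmatically reviewing a model card before adoption.
# Assumes the model is hosted on the Hugging Face Hub; the repo id below is a
# placeholder, and the checklist sections are illustrative, not exhaustive.
from huggingface_hub import ModelCard

card = ModelCard.load("org/some-model")  # hypothetical repo id

# Structured metadata (license, training datasets, evaluation results, ...)
meta = card.data.to_dict()
print("License:", meta.get("license", "NOT DOCUMENTED"))
print("Training datasets:", meta.get("datasets", "NOT DOCUMENTED"))

# Scan the free-text card for sections a vetting checklist cares about.
for section in ("limitations", "bias", "risks", "out-of-scope"):
    status = "documented" if section in card.text.lower() else "MISSING"
    print(f"{section}: {status}")
```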

Vetting AI models is vital, and establishing model provenance should come first in any defense strategy. Biden’s executive order on AI reinforces the importance of vetting, calling for AI models to be red-teamed to suss out potential weaknesses. Model provenance captures a model’s documented history: its origin, its architecture and parameters, its dependencies, the data used to train it, and other corresponding details. By surfacing these details, technologists can determine with confidence which models and sources are trustworthy enough to build a strong defense on.
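
To make this concrete, here is a hypothetical provenance record an organization might keep for each approved model. The schema is an illustrative assumption, not a standard; the fields simply mirror the details listed above, plus an integrity hash so later tampering is detectable.

```python
# A hypothetical provenance record for each vetted model -- the schema is
# illustrative, not a standard. Fields mirror the details listed above.
from dataclasses import dataclass
import hashlib
import pathlib

@dataclass
class ModelProvenance:
    name: str
    origin: str                 # where the weights came from
    architecture: str           # e.g. "transformer decoder"
    parameter_count: str
    training_data: list[str]    # datasets used to train the model
    dependencies: list[str]     # libraries/runtimes the model relies on
    weights_sha256: str         # integrity check for the artifact itself
    red_teamed: bool = False    # has it passed internal red-team review?

def sha256_of(path: str) -> str:
    """Hash the weight file so later tampering is detectable."""
    return hashlib.sha256(pathlib.Path(path).read_bytes()).hexdigest()
```

Even a lightweight record like this makes it possible to answer “where did this model come from, and has anyone tested it?” before the model is wired into a defense pipeline.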

Two AI model families cybersecurity experts use are Generative Adversarial Networks (GANs) and Variational Autoencoders (VAEs). GANs can generate realistic synthetic data, letting defenders anticipate the moves of digital adversaries before they attack. VAEs learn compact representations of normal data and flag inputs that deviate from those patterns, uncovering lurking danger. It’s no surprise that the best defense strategies against GenAI cyber attacks can themselves include GenAI: with tools like GANs and VAEs, companies can anticipate, defend, and protect against attacks at a higher level than previously possible.
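
The VAE approach is worth a brief sketch. Below is a minimal PyTorch example of the standard reconstruction-error technique: a VAE is trained on “normal” traffic features only, so events it reconstructs poorly are flagged for investigation. The feature dimension, threshold, and stand-in random data are all illustrative assumptions.

```python
# A minimal sketch of VAE-based anomaly detection with PyTorch: train on
# "normal" traffic features only, then flag inputs with high reconstruction
# error. Feature count, threshold, and synthetic data are illustrative.
import torch
import torch.nn as nn

class VAE(nn.Module):
    def __init__(self, n_features: int = 16, latent: int = 4):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(n_features, 32), nn.ReLU())
        self.mu = nn.Linear(32, latent)      # mean of latent distribution
        self.logvar = nn.Linear(32, latent)  # log-variance of latent dist.
        self.decoder = nn.Sequential(
            nn.Linear(latent, 32), nn.ReLU(), nn.Linear(32, n_features)
        )

    def forward(self, x):
        h = self.encoder(x)
        mu, logvar = self.mu(h), self.logvar(h)
        z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)  # reparameterize
        return self.decoder(z), mu, logvar

def loss_fn(recon, x, mu, logvar):
    # Reconstruction error + KL divergence to keep the latent space regular.
    recon_loss = ((recon - x) ** 2).sum()
    kl = -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp())
    return recon_loss + kl

# Train on normal-only data (stand-in: random features for illustration).
model = VAE()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
normal = torch.randn(512, 16)
for _ in range(200):
    recon, mu, logvar = model(normal)
    loss = loss_fn(recon, normal, mu, logvar)
    opt.zero_grad()
    loss.backward()
    opt.step()

# Score a new event: unusually high reconstruction error => investigate.
with torch.no_grad():
    event = torch.randn(1, 16) * 5  # an out-of-distribution event
    recon, _, _ = model(event)
    error = ((recon - event) ** 2).mean().item()
# The 1.0 threshold is illustrative; calibrate on held-out normal data.
print("anomaly" if error > 1.0 else "normal", f"(error={error:.2f})")
```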

Ethical guideline implementation for organizational success

Organizations need to preemptively bake specific guidelines for GenAI tools, such as ChatGPT, into their policies in order to set organization-wide expectations. These ethical guidelines clearly instruct employees on how to use the tools and how to report violations.

Organizations first need to determine when and how GenAI tools will benefit them most. While outright banning GenAI may seem tempting to some, that can push usage into unregulated “shadow-use” scenarios, creating opportunities for attackers to exploit.

To reduce this risk, write a straightforward policy that emphasizes protecting personal information and verifying the accuracy of the information the AI tool relies on. The basis of a great GenAI usage policy is keeping it as simple and streamlined as possible: the more complexity, the more likely employees are to disregard the policy and misuse the tool.

Examples of information that should never be shared with GenAI tools include company IP, full names, phone numbers, health records, and any other confidential information. It’s also a good idea to configure external tools (like ChatGPT) to disable chat history when dealing with sensitive or private conversations. Some guides for writing GenAI policies encourage spelling out the nitty-gritty details of AI use, such as company and employee responsibilities, the company’s rationale for acceptable use, escalation pathways, and more. However, the more detailed the policy, the greater the disconnect between the company and its employees, and the less useful the policy becomes; a simple automated guardrail, like the sketch below, can do more than pages of fine print.
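
To make the “don’t share confidential information” rule enforceable rather than aspirational, a pre-submission filter can catch obvious identifiers before a prompt ever leaves the network. The patterns below are a deliberately minimal, hypothetical sketch; real deployments typically rely on dedicated DLP tooling with far broader coverage.

```python
# A deliberately minimal pre-submission filter: redact obvious identifiers
# before a prompt is sent to an external GenAI tool. The patterns are
# illustrative; production deployments should use dedicated DLP tooling.
import re

PII_PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "PHONE": re.compile(r"\b\d{3}[-.\s]?\d{3}[-.\s]?\d{4}\b"),
    "SSN":   re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

def redact(prompt: str) -> str:
    """Replace matches with a tag so the prompt stays useful but safer."""
    for label, pattern in PII_PATTERNS.items():
        prompt = pattern.sub(f"[{label} REDACTED]", prompt)
    return prompt

print(redact("Call Jane at 555-867-5309 or email jane@example.com"))
# -> "Call Jane at [PHONE REDACTED] or email [EMAIL REDACTED]"
```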

GenAI vendors are increasingly aware of these concerns and offer enterprise workspaces that do not share private information. It’s important to partner with your security, compliance, and risk teams to understand whether these solutions comply with your data privacy requirements.

Knowledge is power when building cyber resilience

Knowledge is one of the most powerful defense tactics in cybersecurity. A well-informed team creates an impactful shield against fast-moving technological threats, which is why it’s vital to equip employees with awareness of the AI landscape. This fosters informed decisions and contributes to a culture of cybersecurity awareness, nudging organizations along the path of resilience against potential exploitation.

Proactive measures to educate staff on AI’s power and associated risks enhance any organization’s security posture. They also enable a swift, confident response to unauthorized or suspicious behavior.

GenAI cyber attacks continue to rise at an alarming rate, creating an undeniable risk of exploitation across all industries. Business leaders must recognize the pressing need to fortify defenses against this evolving threat. With proper vetting processes, clear policies and education on ethics and usage, and a robust threat detection and response plan, organizations can fight fire with fire, leveraging GenAI to defend against the cyber attacks of today and tomorrow.