If there’s one way to describe 2023, it’s as the year of Artificial Intelligence. AI not only became a household phrase but also a key consideration for organizations across practically every department. Businesses scrambled to find new ways to capitalize on the technology while also putting new parameters in place for its use among employees. We are truly just beginning to scratch the surface of the impact it will have.
In particular, the security industry was hit by an increasing number of AI-powered cyberattacks in 2023, and that is not going to slow down in 2024. As these attacks evolve and AI infiltrates every aspect of business, here’s what security leaders should resolve to do this year to better protect their organizations’ – and customers’ – data amid AI threats.
Protect against AI poisoning attacks by securing data
We’ve seen reports of the ways AI has been influencing the cyber risk landscape. In fact, the National Institute of Standards and Technology (NIST) recently identified new ways that cyberattacks can manipulate AI systems. However, this year, I predict that AI poisoning attacks will be the new software supply chain attacks – meaning it is critical for organizations to prepare now for these threats.
AI poisoning attacks will be characterized by threat actors targeting ingress and egress data pipelines to manipulate data and poison AI models as well as the outputs they produce. With AI being used across a wide variety of business-critical workloads – potentially with very little oversight – maintaining the integrity of these systems needs to be a high priority. Small tweaks to AI inputs can change outputs dramatically, either immediately or gradually over time. The bottom line: any data being fed to AI tools must be secured. Security leaders need to establish the provenance of that data and use technologies like signing (analogous to code signing in the software supply chain space) to secure it.
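To make the signing idea concrete, here is a minimal sketch of signing and verifying a data batch bound for an AI pipeline. It uses the open-source Python `cryptography` package with Ed25519 keys; the batch payload and function names are illustrative, and a real deployment would keep the private key in an HSM or cloud KMS rather than in process memory.

```python
# Minimal sketch: sign a data batch at its point of origin, then verify the
# signature before the batch is allowed into an AI training/inference pipeline.
# Assumes the `cryptography` package; key management is out of scope here.
import hashlib

from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import (
    Ed25519PrivateKey,
    Ed25519PublicKey,
)


def sign_batch(key: Ed25519PrivateKey, payload: bytes) -> bytes:
    """Producer side: sign a SHA-256 digest of the serialized batch."""
    return key.sign(hashlib.sha256(payload).digest())


def verify_batch(pub: Ed25519PublicKey, payload: bytes, signature: bytes) -> bool:
    """Consumer side: refuse to ingest data whose signature does not check out."""
    try:
        pub.verify(signature, hashlib.sha256(payload).digest())
        return True
    except InvalidSignature:
        return False


# Illustrative usage with an in-memory key.
key = Ed25519PrivateKey.generate()
batch = b'{"records": ["..."]}'  # a serialized training or inference batch
sig = sign_batch(key, batch)
assert verify_batch(key.public_key(), batch, sig)

# Even a one-byte change, the kind of small tweak poisoning relies on,
# breaks verification.
assert not verify_batch(key.public_key(), batch + b"poison", sig)
```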
Defend AI models’ code against attacks
We also need to protect the code that these AI models are built upon. This essentially takes the approach of securing the software supply chain and applies it to AI models.
Modern software development processes can open organizations up to complex security threats. Unauthorized code and malicious software are two examples that can introduce significant risk, and AI is making these threats even more intricate.
For example, a multitude of open-source models and datasets are being published on Hugging Face. How do we establish the identity and provenance of those models, and their suitability for use in an enterprise?
Code signing is a perfect way to establish the identity and provenance of these models. This is a method of putting a digital signature on a program, file, image, or executable, so that its authenticity and integrity can be verified upon installation and execution. Organizations should automate code signing workflows and ensure their keys never leave secure, encrypted storage.
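As a rough illustration of that workflow, the sketch below has a publisher sign a small manifest (model file name plus SHA-256 digest) and a consumer verify both the signature and the digest before loading the model. The manifest format, file name and helper functions are hypothetical, and, as noted above, production setups would automate this and keep signing keys in secure, encrypted storage.

```python
# Hypothetical sketch of code signing for a model artifact: the publisher
# signs a manifest describing the file, and the consumer verifies the
# manifest's signature and the file's digest before loading the model.
import hashlib
import json
from pathlib import Path

from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import (
    Ed25519PrivateKey,
    Ed25519PublicKey,
)


def publish_manifest(key: Ed25519PrivateKey, model: Path) -> tuple[bytes, bytes]:
    """Publisher side: emit a manifest and its detached signature."""
    digest = hashlib.sha256(model.read_bytes()).hexdigest()
    manifest = json.dumps({"file": model.name, "sha256": digest}).encode()
    return manifest, key.sign(manifest)


def verify_model(pub: Ed25519PublicKey, model: Path,
                 manifest: bytes, signature: bytes) -> bool:
    """Consumer side: check who signed the manifest, then check the artifact."""
    try:
        pub.verify(signature, manifest)  # authenticity of the claims
    except InvalidSignature:
        return False
    claims = json.loads(manifest)
    actual = hashlib.sha256(model.read_bytes()).hexdigest()
    return claims["sha256"] == actual    # integrity of the artifact itself


# Illustrative usage; "model.safetensors" is a placeholder artifact.
path = Path("model.safetensors")
path.write_bytes(b"fake model weights")
key = Ed25519PrivateKey.generate()
manifest, sig = publish_manifest(key, path)
assert verify_model(key.public_key(), path, manifest, sig)
```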
Be vigilant when it comes to AI misinformation
2024 is a major election year for the U.S., U.K. and India. As these elections coincide with the mass adoption of generative AI, we are likely to see AI supercharging election interference in 2024, potentially taking election hacking to a whole new level.
From the creation of convincing deepfakes to an increase in targeted misinformation, the concepts of trust, identity and democracy itself will be under the microscope. This puts an even greater onus on individuals – as well as political, social and business leaders – to scrutinize what they see and make informed decisions to root out false content. Doing so will help avoid potentially damaging and far-reaching negative effects on the democratic structure of our society.
The promises – and threats – of AI are tremendous. If business leaders start preparing now, they will be in a better position to seize the opportunities while better protecting their organizations from AI’s nefarious consequences. While we can’t predict the future, resolving to be more vigilant and securing critical data and code (including models) will help organizations strengthen their cybersecurity posture.