
US, UK Lead 18 Nations in Adoption of AI Security Guidelines

The United States and United Kingdom have rolled out jointly developed AI security guidelines, with input from 21 global agencies and the agreement of 18 nations. The principles range from security by design at the development stage to ongoing maintenance and updates to prevent cyber attacks.

The “Guidelines for Secure AI System Development” effort was led by the US Cybersecurity and Infrastructure Security Agency (CISA) and the UK’s National Cyber Security Centre (NCSC) but also involved input from Australia, Canada, Chile, Israel, Japan, New Zealand, Nigeria, Singapore, South Korea and much of the EU. Major technology outfits in the field such as Google, Microsoft and OpenAI also made contributions. The AI security guidelines break down known risks that algorithms pose and establish protocols in four key areas: secure design, secure development, secure deployment, and secure operation and maintenance.

US & UK national guidelines align closely with existing CISA, NSCS and NIST guidelines

The AI security guidelines, which are entirely voluntary at this point, are aimed at providers of AI models and organizations using them via APIs. The guidance offers a general overview of expected risks and threats from the initial design process through the development life cycle and deployment, and on through ongoing operation and maintenance after deployment.

The document published by CISA and NCSC specifically notes that security can become an afterthought when the “pace of development is high,” something that existing AI models are already taking a lot of heat for. The AI security guidelines call not just for developer ownership of security in all phases but for “radical transparency and accountability” to promote public trust in this emerging tech (which promises lucrative hauls for the companies and nations that can get out and stay out in front).

The AI security guidelines draw substantially on the existing NCSC secure deployment and development guidance documents and NIST’s Secure Software Development Framework (SSDF). Security by design is the first point stressed: it includes modeling threats to the system and weighing security benefits and trade-offs when selecting an AI model. Specific design elements include performing due diligence evaluations of any external libraries used and of the security postures of any external model providers.

The “secure development” aspect of the AI security guidelines stresses something that is an ongoing issue across all aspects of online business: supply chain security. The document points to the existing NCSC Supply Chain Guidance as a reference for evaluating partners, and also emphasizes the importance of creating catalogs of AI-related assets. It also touches on the concept of “technical debt”: engineering decisions that sacrifice long-term benefits to meet a short-term need.

Deployment advice stresses incident management procedures and the responsible release of new tools, while the operations and maintenance portion mostly covers ongoing monitoring. It also calls for establishing information-sharing communities to exchange lessons learned, potentially through the publication of regular bulletins.

AI security guidelines highlight unique and emerging risks

Though 18 countries signed on to a pledge to use the AI security guidelines as a reference going forward, there is nothing binding in them, and they do not address any of the potential legal issues around how machine learning tools gather their training data. The agreement at least serves as a starting point and an acknowledgment of AI risk, the consequences of which are already being felt in a variety of ways.

Aside from the possibility of lawsuits over chatbot scraping and regurgitation of protected information, ChatGPT and similar tools have already highlighted courtroom issues. In June, two lawyers with New York’s Levidow law firm were sanctioned for submitting a ChatGPT-generated brief that contained entirely made-up citations and quotes that the chatbot had apparently hallucinated.

There is also the issue of exactly how chatbots store and use the volumes of personal information poured into them, some of which is protected by existing regulations. Discrimination and bias are also major concerns after some of the bigger and more popular machine learning tools have had repeated incidents of responding with racist or bigoted content, something that has sent developers scrambling to implement guardrails.

In addition to avoiding the question of intellectual property rights, the AI security guidelines also do not spend much time on open source software issues, as Chris Hughes (chief security advisor at Endor Labs and Cyber Innovation Fellow at CISA) notes: “This document represents a sound and concise set of guidance for organizations developing and using AI systems. It’s certainly helpful, as it cites best practices and sound recommendations across design, development and operations. It also provides readers with citations to additional in-depth resources and guidance … The guidance mentions open source software (OSS), when discussing supply chain security and acquiring well secured and documented components from external sources, such as OSS and third-party developers. It also includes commercial entities in the mix. This is a recognition of the potential threats from OSS related to AI development and use, and the risks posed by external libraries and components that can lead to a compromise. Organizations must ensure the OSS components, libraries and AI models they use are trustworthy, properly governed and secured. The guidance could have gone further on OSS inventory, scanning, governance and security, but overall it speaks to risk from a high-level perspective, regularly citing external and additional guidance for more depth.”

Joseph Thacker, security researcher with AppOmni, is also generally positive about the AI security guidelines but notes several additional shortcomings: “Overall, I love the focus on AI and security. It appears this guidance is trying to be more specific, but it’s still pretty vague in applications of practical actions. It will still be helpful for organizations that don’t know where to start, but isn’t very helpful for enterprises and other organizations that already have decent security … Nearly every AI product is SaaS-based. These apps often handle sensitive data and can be a prime target for cyber attacks, so securing them is extremely important, but that requires an app-centric approach. The Guidelines for Secure AI System Development don’t mention misconfigurations at all. On top of that, there are a multitude of new attacks and the Guidelines don’t get into the details of those. It touches on prompt injection only once, and doesn’t mention all the risks associated with giving LLMs access to tools and plugins.”

And Troy Batterberry, CEO and founder of EchoMark, would like to see more on insider threats: “While logging and monitoring insider activities are important, we know they do not go nearly far enough to prevent insider leaks. Highly damaging leaks continue to happen at well-run government and commercial organizations all over the world, even with sophisticated monitoring activities in place. The leaker (insider) simply feels they can hide in the anonymity of the group and never be caught. An entirely new approach is required to help change human behavior. Information watermarking is one such technology that can help keep private information private.”

Regulations that actually punish AI developers for misdeeds fall to individual nations, most of which have been hesitant to make decisive moves thus far. The EU is generally the farthest along, with the EU Artificial Intelligence Act in the late stages of discussion and likely to be passed in the coming months. In the US, the Biden administration has expressed interest in regulating AI, but Congress has yet to signal serious intent to take up the issue.