Vice President Kamala Harris has announced that the White House Office of Management and Budget (OMB) is issuing a new set of AI rules to be followed by all federal agencies. The new directives stem from the Biden administration’s October executive order on AI, and, as with that prior action, agencies will be given compliance deadlines ranging from May to December 2024.
Two of the biggest components of the new AI rules are mandatory safeguards for federal agencies’ AI systems, such as those used in healthcare and financial benefit programs, and expanded transparency requirements that will better inform the public of risks and, in some cases, provide access to code and raw data when doing so does not pose a security problem. The order also seeks to grow the federal AI workforce, with the lead item in this area being a requirement that agencies appoint a Chief AI Officer.
AI rules seek to balance safety with freedom to innovate
Every country is walking a tightrope: each wants to be among the leaders in AI development while also recognizing the technology’s inherent potential for damage and the need for immediate, strong regulation. The OMB AI rules address this with a framework of “responsible innovation” that, at least in terms of use by federal agencies, appears to prioritize projects that tackle social problems such as disease and natural disaster response.
In terms of placing restrictions on AI, federal agencies will be required to implement a set of safeguards by December 2024 aimed at addressing public safety concerns. These include ongoing testing and monitoring of systems that interact with the public, such as the TSA’s airport security screening and federal healthcare systems like Medicare. The biggest single takeaway from this portion of the AI rules is that systems that cannot be appropriately safeguarded will have to be taken out of commission (with limited exceptions for critical agency functions, or for cases in which removal would itself increase overall safety risk).
Transparency is also a focus of the AI rules. Federal agencies will be ordered to publish annual inventories of their AI use cases, most of which will be available to the public unless there is a security concern (in which case more generalized metrics will be published as a substitute). Government-owned AI code, models, and data will also be made available to the public when releasing them does not pose a similar security risk.
The administration is also looking to grow the federal AI workforce, with a focus on expanding AI governance. Agencies are being advised to expand and upskill their AI staff under new Office of Personnel Management guidance that improves terms for AI roles, and the Biden administration has committed to hiring 100 AI professionals by the summer of 2024 to promote the safe and trustworthy use of AI.
Federal agencies have also been instructed to appoint Chief AI Officers to coordinate AI use. The rules additionally require agencies to convene AI Governance Boards to assist in this task, with each of the 24 CFO Act agencies required to do so by May 2024. Several, such as the Departments of Defense and Veterans Affairs, already have a board in place.
Federal agencies await more detail about AI safeguard requirements
For the most part, the AI rules were not unexpected; they first appeared in draft form alongside the October AI executive order and have since gone through a public comment period. But many questions about technical details have yet to be answered by the further guidance that various federal agencies have been tasked with developing.
For example, the impacted federal agencies do not yet know exactly what requirements for audits or for purchasing AI solutions from third-party vendors will look like. Federal bureaucracy is certainly not known for rapid movement and quick adoption of cutting-edge tech, and the administration’s summer hiring blitz appears to be at least in part an attempt to facilitate these changes.
Joseph Thacker, principal AI engineer and security researcher at AppOmni, thinks that some elements of the OMB rules can be implemented by their 2024 deadlines, but that others are unrealistic for federal agencies: “Drafting their policies and creating a plan is definitely doable by the December deadline. However, the requirements to perform an AI impact assessment and test AI for performance in a real-world context may be challenging to complete by this deadline. You’ll need a production implementation for these, and many agencies are likely to struggle to get a full (or even small) implementation out by this timeframe. It’s extremely important that OMB is encouraging agencies to expand their usage of AI. Often, government agencies are slow at implementing and learning about new technologies – this will force them to learn AI much faster because the best way to learn is by using it. Implementing AI will also dramatically increase their ability to regulate it.”
Narayana Pappu, CEO at Zendata, foresees particular problems for federal agencies that must deal with potential bias in their systems: “The rules for AI are very similar to privacy regulations like GDPR in [the] EU. [The] AI bias and transparency problem is a data governance problem. If [you] feed AI biased data you have biased results, and if you don’t have governance in place for mission critical systems (things like shadow IT) you again have biased results and lack of transparency. [Those are the] two main things the laws are trying to address. I recommend back testing on the historical data, experts to conduct manual evaluation of the results, and necessary adjustments/list of exclusions when AI should not be used.”
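To make that recommendation concrete, the minimal sketch below shows one way an agency team might backtest historical automated decisions for group-level disparity before routing flagged cases to expert manual review. It is an illustration only: the record fields, the 0.8 threshold (a simple “four-fifths”-style screen), and the idea of an exclusion list are assumptions made for the example, not requirements drawn from the OMB memo.

```python
# Illustrative sketch only: hypothetical data fields and threshold, not from the OMB rules.
# Backtest historical automated decisions for group-level disparity and flag groups for review.
from collections import defaultdict

def approval_rates_by_group(records):
    """Compute the historical approval rate for each group in the decision records."""
    totals, approvals = defaultdict(int), defaultdict(int)
    for rec in records:
        totals[rec["group"]] += 1
        if rec["approved"]:
            approvals[rec["group"]] += 1
    return {group: approvals[group] / totals[group] for group in totals}

def flag_disparities(rates, threshold=0.8):
    """Flag any group whose approval rate falls below `threshold` times the best group's rate
    (a simple 'four-fifths'-style screen); flagged groups would go to expert manual evaluation."""
    best = max(rates.values())
    return [group for group, rate in rates.items() if rate < threshold * best]

if __name__ == "__main__":
    # Hypothetical historical decisions from an automated benefits system.
    history = [
        {"group": "A", "approved": True}, {"group": "A", "approved": True},
        {"group": "A", "approved": False}, {"group": "B", "approved": True},
        {"group": "B", "approved": False}, {"group": "B", "approved": False},
    ]
    rates = approval_rates_by_group(history)
    print(rates)                    # e.g. {'A': 0.67, 'B': 0.33}
    print(flag_disparities(rates))  # e.g. ['B'] -> candidate for manual review or an exclusion list
```

A check of this kind only surfaces candidates for the manual evaluation and exclusion lists Pappu describes; it does not by itself establish or remove bias.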
The new AI rules follow previous action by the administration to regulate large AI models, and its commitment to a joint international declaration that begins dialogue about guardrails and military applications. The EU is in the late stages of finalizing and implementing its AI Act, the first comprehensive set of AI regulations of its type in the world, and China is likely to implement a similar set of rules in the coming months.
Marcus Fowler, CEO of Darktrace Federal, adds: “While establishing AI leadership is an important step in ensuring the safe use of AI technologies (and there are existing frameworks around secure AI system development provided by CISA and the UK NCSC), these efforts and resources are not the only thing organizations can do to adequately encourage the safe use of generative AI technologies. In order to ensure the safe and effective deployment of these tools in their workplaces, it is vital that AI officers and their associated teams have a firm understanding of ‘normal’ behavior across their networks and IT environments and take part in a dedicated effort to educate their broader organizations with these findings. Through this approach, AI executives and their teams can ensure their broader organizations are equipped with a general understanding of the use cases and risks associated with leveraging AI tools, how these issues relate back to their roles and areas of business specifically, and best practices for mitigating business risk.”
“There are three areas of AI implementations that governments and companies should prioritize: data privacy, control, and trust. But it’s vital that organizations remember that each of these areas requires significant influence from leaders to remain effective. In addition to leveraging industry standards and appointing key leadership teams tasked with ensuring the effective use of these technologies, it’s critical that organizations also establish trust in these roles across their companies by highlighting the value that AI-focused roles bring to the broader organization. This will help to ensure that each and every team member is familiar and comfortable with the internal resources available to them, encouraging stronger collaboration between teams in tandem with the supervised use of these tools, ultimately strengthening an organization’s broader security posture,” added Fowler.