Polygons forming a human brain showing AI executive order

Biden Admin’s Executive Order on AI Requires New Safeguards, Improved Transparency

A new executive order from the Biden administration addresses a wide range of the potential harms that AI can cause, putting new safeguards in place for everything from biological materials engineering to deepfakes.

Most of the executive order’s legally binding requirements are backed by the existing terms of the Defense Production Act, which allows the federal government to impose certain restrictions on private companies whose products may be used as weapons of war. Some elements of the order consist only of guidance, however, and some have been punted down the road to the development of a National Security Memorandum addressing requirements for the intelligence and military communities.

AI executive order covers fraud, terrorism and security by design

One of the executive order terms backed by the Defense Production Act is a requirement that the companies developing the “most powerful” AI systems notify the federal government of any training that they do, and share the results of red team (security response) tests with the appropriate agencies. This stipulation does not name particular companies but includes those that have AI models that pose a serious risk to public health, economic security or national security. The National Institute of Standards and Technology (NIST) will be creating testing standards to be applied to organizations in critical infrastructure, chemical, biological, radiological and nuclear sectors.

An AI Safety and Security Advisory Board (AISSB) is also being assembled by the Department of Homeland Security (DHS). Among other things, this body will work with the Department of Defense to plan a pilot program that will develop an AI capability that can fix vulnerabilities in critical US government networks. It will also be developing a program to assist AI developers in mitigating intellectual property theft risks, and will be tasked with putting together a benefits program to attract qualified AI developers from other countries.

Most of the other terms established by the executive order task assorted federal watchdog agencies with either establishing new tests and standards for the companies that they regulate, or establishing guidance for industries that they oversee. For example, the Department of Commerce will be developing a watermarking program to label AI-generated content. The federal government will use this tool to authenticate official releases of video and audio, and private companies will be encouraged to also use it to combat deepfakes.

Timothy Morris, Chief Security Advisor at Tanium, notes that while the federal government only has limited ability to directly order private companies to take action, it always plays a leading role in overall cybersecurity standards and its purse power often compels companies to make changes:  “Regulations are intended to protect consumers/civilians against a wide array of possible abuses. Using the federal government’s purchasing power can have heavy influence on any new technology. However, with any new innovations, regulations and red tape slow them down. The federal government can require agencies to perform evaluations of AI models to ensure they are safe and biases are limited or removed before a federal worker could use them. “Red-teaming” exercises are a type of evaluation that can be done to against AI and LLMs to accomplish this. I can imagine that all departments within federal government agencies can be affected. The Departments of Defense and Energy are key ones that could assess AI to bolster national cybersecurity.”

Oz Alashe, MBE of CEO of CybSafe, notes that largely unregulated AI use is already extremely common at companies across the country:  “President Biden’s executive order on AI signifies a watershed moment for national security and the international AI arena. The decision to mandate stringent evaluations of cutting-edge AI technologies before their use by federal agencies highlights both the transformative promise and lurking dangers of AI systems. CybSafe’s recent research on Generative AI’s cybersecurity risk underscores this, finding that over a third of office workers leverage AI tools for tasks, and an alarming 89% report potentially, inadvertently sharing sensitive information with AI tools. Generative AI, with its power to craft realistic narratives, can be especially dangerous when given sensitive and private information from companies. Over 60% of respondents from CybSafe’s research confessed difficulty in differentiating AI-generated text from human-authored content. This exposes them, and by extension, their organizations, to a new breed of AI-orchestrated cyber threats. More concerning, a majority of these users have indicated their companies have yet to educate them about such threats.”

Privacy, civil and consumer rights measures included

In terms of protecting the privacy of American citizens, the executive order calls for the establishment of a Research Coordination Network to develop tools that protect the integrity of cryptography. The National Science Foundation would be tasked with promoting the adoption of tools developed by this body to federal agencies.

The executive order also calls for improved privacy guidance for federal agencies that specifically addresses the procurement of information from private data brokers and accounting for AI risks to personal data. A report issued by the Office of the Director of National Intelligence in June found that numerous government agencies have ongoing contracts with these data brokers, and that there was potential for this pool of intelligence to cause harm.

Equity and civil rights issues have also been a major concern since decision-making algorithms were introduced, and the executive order addresses this area with the establishment of technical training for the Department of Justice and federal civil rights bodies to improve ability to investigate AI-related complaints. The order also calls for “clear guidance” for landlords, federal contractors and the justice system in how AI algorithms are used for various functions.

In terms of more general consumer protections, the Department of Health and Human Services will establish a safety program to address unsafe health care practices that involve AI. The Department of Labor is also tasked with preparing a report on AI’s potential impact on the workforce, and identifying options for workers facing disruptions to their jobs.

But the executive order also opens with talk about “seizing the promise” of AI, and to that end it calls for measures meant to support AI startups and provide technical assistance to smaller businesses. The center of this will be a pilot run of a National AI Research Resource system designed to provide data and resources to AI researchers and students and also act as a source for accessing AI-related government grants.

Marcus Fowler, CEO of Darktrace Federal, provides some direct industry insight in this particular area:  “In our decade of experience applying AI to the challenge of cybersecurity, we’ve seen first-hand the significant benefits that AI offers. It can uplift people and make their work and lives faster, easier, more secure and more efficient. As we increasingly rely on these tools, it is even more vital that they are secured properly. A compromise could negatively impact public trust in AI and derail its potential. An attacker gaining control of an AI system could have serious consequences to business, infrastructure and our personal lives. We’re already seeing indicators of security challenges posed by general purpose AI. It is lowering the barriers for attackers and making them faster; attackers are breaking general purpose AI tools to corrupt their outputs; and accidental insider threats can put IP or sensitive data at risk.”

The Biden administration has noted that the federal government was too slow to address the harms of social media, and seeks to get things right with AI by moving at a faster pace than normal and staying on top of these systems as they develop. The administration has expressed particular concern about the spread of false information via deepfakes, the supercharging of scams and criminal hacking, and worsening social and racial inequality if decision-making systems are misapplied.

The terms of the executive order that are enforceable will roll out over the next 90 days to a year, but Congress will have to play a role in putting some of these measures in place. Stuart Wells, CTO of Jumio, notes that these threats exist today and that organizations must have their own plans in place to respond to them: “In light of this growing threat, organizations must elevate the protection of their users. This can be accomplished by developing and implementing standards and best practices for detecting AI-generated content and authenticating official content and user identities. Which can be done through tactics such as deploying biometrics-based authentication methods, including fingerprint or facial recognition, and conducting continuous content authenticity checks. Organizations must act now to protect their users from increasingly sophisticated AI-enabled fraud and deception methods. Enhancing identity verification tactics is essential to mitigate this risk.”

In terms of additions and future developments, Dave Gerry (CEO at Bugcrowd), would like to see a reporting system for AI flaws similar to CVSS scores: “Safety and privacy must continue to be a top concern for any tech company, regardless of whether it is AI focused or not. When it comes to AI, ensuring that the model has the necessary safeguards, feedback loop, and most importantly, mechanism for highlighting safety concerns is critical.  As organizations rapidly adopt AI for all of the efficiency, productivity and democratization of data benefits, it’s important to ensure that as concerns are identified, there is a reporting mechanism to surface those in the same way a security vulnerability would be identified and reported.”

Michael Leach, Compliance Manager for Forcepoint, notes that this rapid movement on AI may also prompt Congress to finally take serious action on establishing federal data privacy law, something that states already have a years-long head start on: “The emphasis in the Executive Order that is placed on the safeguarding of personal data when using AI is just another example of the importance that the government has placed on protecting American’s privacy with the advent of new technologies like AI. Since the introduction of global privacy laws like the EU GDPR, we have seen numerous U.S. state level privacy laws come into effect across the nation to protect American’s privacy and many of these existing laws have recently adopted additional requirements when using AI in relation to personal data. The various U.S. state privacy laws that incorporate requirements when using AI and personal data together (e.g., training, customizing, data collection, processing, etc.) generally require the following: the right for individual consumers to opt-out profiling and automated-decision making, data protection assessments for certain targeted advertising and profiling use cases, and limited data retention, sharing, and use of sensitive personal information when using AI. The new Executive Order will hopefully lead to the establishment of more cohesive privacy and AI laws that will assist in overcoming the fractured framework of the numerous, current state privacy laws with newly added AI requirements. The establishment of consistent national AI and privacy laws will allow U.S. companies and the government to rapidly develop, test, release and adopt new AI technologies and become more competitive globally while putting in place the necessary guardrails for the safe and reliable use of AI.”