The Biden White House continues to seek a balance between AI innovation and safe, ethical use with a new national security memorandum that stresses the need to outcompete rivals but also sets limits in the most potentially abusive areas.
The national security memo instructs intelligence agencies and the Pentagon to ramp up their adoption of AI, but also warns that it must not be allowed to abridge free speech or undermine democratic values. It also stresses the need to compete with China in military applications, but warns that AI must be kept away from nuclear controls.
National security memo focuses on intelligence and military uses of AI
The memo notes that the pace of AI development shows no signs of slowing, and that it is largely taking place outside of government. That makes partnership with industry, civil society, and academia necessary to address national security concerns, which in turn requires changes to government organizational and informational infrastructure.
National Security Adviser Jake Sullivan called the memo the “first ever strategy for harnessing the power and managing the risks of AI to advance our national security,” and specifically named China as the chief rival building its own “technological ecosystem,” to which the US must create a superior alternative.
However, Sullivan noted that this does not preclude discussion with China about curbing AI’s most serious risks. The two countries included this item in November 2023 talks between Presidents Biden and Xi Jinping, where an oral agreement was reached to keep dialogue open on AI safety and risk issues. American AI experts have since met with Chinese officials in Geneva to continue this discussion, a meeting that Sullivan referred to as “productive.”
The memo notes, however, that a “strategic surprise” by China is a central concern, and that the country is very active in integrating AI into its military capabilities. It stresses that the US must keep pace, but that AI systems should not yet be trusted with military action decisions, including the “Skynet scenario” of giving them possible paths to access nuclear weapons. It also calls for new restrictions on lethal autonomous weapons to ensure responsible use.
The memo also notes that the US is the current leader of the pack in AI development, and that this makes protecting private developers a national security priority. Federal agencies have been tasked with assisting by sharing threat intelligence with field leaders such as Microsoft and Google, and the government will review its own supply chains for security and for potential diversification of chip sources.
AI use in war remains a point of international contention
The memo addresses three central areas in which the US must excel, one of them being counterintelligence to protect US developers from China’s spies. The other two are recruiting foreign AI talent and ensuring that the massive energy and data center capacity tomorrow’s advanced AI systems will need is being built out today. However, little of that latter point can be directly addressed by executive order; the administration will need to go to Congress to open the purse strings, with further immediate action very unlikely given that the administration is on its way out the door.
National security agencies are nonetheless being directed to step up AI adoption and talent recruiting. The national security memo authorizes them to reform hiring practices to improve AI talent acquisition and contracting practices to improve access for private sector AI firms, and instructs federal agencies to review their existing cybersecurity policies and procedures for ways to speed up AI adoption without taking on extra risk. Each national security agency will also now need to designate a chief AI officer, and these officers will together form an AI National Security Coordination Group.
The US AI Safety Institute is also seeing an expanded role. In addition to designating it as the primary point of contact for private sector AI testing and evaluation, the national security memo tasks it with pursuing voluntary preliminary testing of at least two frontier AI models prior to their public deployment and issuing guidance for AI developers on risk management and evaluation within 180 days. It will also collaborate with the NSA’s new AI Security Center to develop “rapid systematic classified testing of AI models’ capacity to detect, generate, and/or exacerbate offensive cyber threats.”
Amidst all of this, the administration has also advised that the government cannot use AI to monitor the free speech of citizens, and that its use must uphold civil liberties and human rights.
Jeff Le, VP of Global Government Affairs and Public Policy at SecurityScorecard, notes that there are many elements the government has yet to address: “Overall this memorandum is an important step forward and builds on significant progress to date. It is however important to note that governors, mayors, and other major state/local/tribal/territorial leaders should be involved beyond consultations on permitting and incentives. As subnational leaders are charting a path for legislative enactment themselves, more discussions should be filtered and collaborated with the AI Safety Institute as a means to advance U.S. AI superiority and meaningful stakeholder engagement.”
“It’s important to highlight that the U.S. investor community is involved in defense tech and AI. This should be expanded to relevant philanthropies and other vested family offices that have substantial technical resources and perspectives to bear. There should also be more to address the AI workforce, specifically to ensure more can be done to reduce superficial barriers, including degree requirements. Other models may attract more diverse and unique talent that the U.S. Government sorely lacks. Reform to include faster tracks beyond who is available for hiring and retaining technical talent will be important for maintaining U.S. superiority in the AI space,” added Le.
Cody Cornell, Chief Strategy Officer at Swimlane, notes that the success of these ambitious initiatives will likely hinge on proper funding: “The memorandum released yesterday aims to position the U.S. as a global leader in AI while promoting responsible and secure AI development. While this is a significant step forward, this approach presents a challenging balance between fostering innovation and ensuring safety and privacy. The push for robust security measures to address AI’s potential risks may inadvertently create friction, potentially slowing down technological advancements. The U.S. aspires to be at the forefront of technological innovation, but achieving this while creating and enforcing stringent safety protocols presents a near-contradictory goal. The memorandum’s ambitious timelines underscore the urgency of the U.S. AI strategy, yet they also raise questions about the feasibility of these goals and where budget will stem from. AI initiatives require substantial investment, not only in the technology itself but in the workforce training across governmental agencies. Ensuring sufficient budget and resources for these initiatives is crucial to meet the outlined goals effectively.”
“Additionally, a cohesive global approach to AI governance is paramount. The current landscape reflects fragmented regulatory efforts across countries. Global collaboration on standardized AI guidelines across international boundaries is essential to manage shared risks and facilitate cross-border compliance for companies operating globally. Another key factor in the U.S. advancement of AI leadership is hiring the best technology experts. By championing international collaboration and recruiting the world’s best technology experts, the U.S. can strengthen its broader technological and economic goals,” noted Cornell.