Biden Executive Order on AI Risks Revoked by Trump

Donald Trump has been signing a flurry of executive orders in his first days in office, but he has also repealed some of those issued by his predecessor. Among these is the Biden administration’s 2023 order directing developers to share information on anticipated AI risks with the government prior to releasing their products to the public.

The Biden administration had invoked the Defense Production Act as the basis for the order, citing potential AI risks to national security and public interests. Trump had made repealing the order a part of his policy during the presidential election, though likely one that went unnoticed by the general public amidst numerous other issues.

2023 executive order on AI risks rescinded in the name of innovation

Biden’s Executive Order 14110 of Oct. 30, 2023 (“Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence”) was aimed at developers of “the most powerful” AI systems. It required those organizations to report any potential AI risks to “national security, national economic security, or national public health and safety” and to share the results of their red-team safety tests with federal agencies.

The Trump executive order frames the move as a boost to America’s competitiveness in the AI race and an economic shot in the arm. While Trump has said that he generally plans to loosen AI restrictions during his current term, his first administration issued its own executive order on AI just before transferring power to the Biden administration. That order focused on public trust, requiring federal agencies to design and use AI in ways that protect privacy and individual rights. The repealed Biden order overlapped with it in part, going further by directing federal agencies to create guidelines for the responsible use of AI.

AI risks may fall by the wayside as global competition heats up

The US tech industry is undergoing something of a political pole shift, as figures like Jeff Bezos and Mark Zuckerberg publicly cozy up to Trump after years of weathering accusations of anti-Republican and anti-conservative bias on their platforms. That is likely due in no small part to the administration’s position on AI development. And concerns about AI risks may be shifting to the background with the recent news that Hangzhou-based DeepSeek’s R1 model can keep pace with the current top players while requiring substantially less computing power to run.

In that vein, the US has announced the $500 billion “Stargate Project,” aimed at securing dominance of the global AI landscape. The project draws together OpenAI, SoftBank, and Oracle, backing their research with substantial private investment running through 2029. Ten data centers meant to support the project are under construction in Texas, with more planned in other states.

A new report on AI risks from the World Economic Forum warns, however, that innovation cannot be the sole point of focus. There are very real near-term threats from the use of AI in military applications, from misinformation, and from reliance on systems for decision-making that end up introducing bias. Specific risk analysis also remains difficult because so many of these systems are opaque to the outside world, with minimal information shared publicly about their internal workings. The report characterized the overall risk landscape as “bleak,” with AI seen as a significant near-term contributor to the leading threat concern: state-based armed conflict.

AI risks are also contributing to a significant ongoing “trust gap” among consumers. While new models offer enhanced capabilities, users remain wary of the problems that continue to crop up in current models: hallucinations, confident delivery of inaccurate or intentionally misleading information, ethical issues, and the general fear that the technology will keep eliminating jobs. There is still great wariness about adopting it for critical decisions, such as medical diagnoses, even as Larry Ellison promises that the technology will make cancer vaccines available within our lifetimes. That wariness stems not just from the mistakes these systems make and concerns about bias, but also, again, from their opacity and the inability to examine their internal workings.

Gabrielle Hempel, Customer Solutions Engineer at Exabeam, expands on the risks posed when innovation begins to outrun regulation: “As someone who cut their cybersecurity teeth in the world of medical device vulnerabilities, I have seen firsthand how technological innovation can outpace regulation, leaving organizations and individuals vulnerable. Without a robust framework for governance, we risk the misuse of AI, from weaponized deepfakes to systemic biases that amplify inequalities. AI can be a tool for incredible progress, but only if we set clear guardrails that prioritize ethical development, security, and accountability. The stakes are high not just for corporations but for national and global security. Thoughtful regulation isn’t about stifling innovation but ensuring AI systems work for us, not against us. This debate isn’t about politics; it’s about the future we are building. Whether in cybersecurity, legal compliance, product development, or technology strategy, we all have a role in advocating for responsible AI practices. We should be pushing for policies that balance innovation with safety and protect the integrity of our systems and societies.”

Trump spent January 23 signing more executive orders, including headline-grabbing items such as the release of historical records long sought by conspiracy theorists. Amidst this news, the president also signed an order directing the creation of an Artificial Intelligence Action Plan, giving the relevant agencies 180 days to identify more Biden administration policies and regulations to remove while charting a path to US “global AI dominance.”