White House Lays Down New AI Guidelines to Govern AI Applications and Encourage Innovation

One year after issuing the executive order known as the “American AI Initiative,” the Trump administration has announced a set of AI guidelines to serve as the basis for regulating artificial intelligence. The regulatory principles aim to prevent overreach by regulatory authorities while allowing innovation in the field. The guidelines comprise 10 principles that federal agencies should consider when regulating the use of artificial intelligence in both the public and private sectors.

The core principles of the new AI guidelines

The Office of Science and Technology Policy indicated that federal agencies should implement AI regulations grounded in fairness, non-discrimination, openness, transparency, safety, and security. The agency also stressed that the rules must prioritize risk assessments and cost-benefit analyses and be backed by scientific evidence and feedback from the American public.

The White House’s hands-off approach to AI

The administration has indicated that its priority is using AI to keep the United States a leader in the field. The president discouraged European and other allies from adopting draconian AI legislation that would stifle innovation and growth, and encouraged them to follow a regulatory approach similar to that of the United States.

The White House warned federal agencies against taking precautionary approaches that prevent society from enjoying the benefits of AI. Lynne Parker, the U.S. deputy chief technology officer at the White House’s Office of Science and Technology Policy, said that the Trump administration wanted to avoid top-down, one-size-fits-all blanket AI regulations.

The threat posed by AI systems

Experts have warned that without proper regulation, AI systems pose a great danger to society. The application of artificial intelligence in areas such as recruitment, healthcare, self-driving cars, credit risk assessment, stock investment, and facial recognition has raised questions about the trustworthiness of these systems.

Experts are afraid that AI could perpetuate prejudices, such as racism and sexism, that already exist in society. This is because AI tools are trained on flawed and biased data that reflects societal stereotypes regarding race, gender, and other attributes. For example, Amazon’s resume-screening algorithm became biased against women applicants because it was trained on resumes submitted mostly by men. Similarly, an algorithm that learned English by mining large volumes of text reportedly ended up reproducing racist associations.
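To illustrate the mechanism the experts describe, here is a minimal, hypothetical sketch in Python. The data, variable names, and libraries (NumPy and scikit-learn) are assumptions made for this illustration and are not drawn from any real hiring system: a simple classifier trained on synthetic “hiring” records in which past decisions favored one group ends up recommending that group more often, even at identical skill levels.

```python
# Minimal sketch: a model trained on historically biased hiring data
# reproduces that bias. All data here is synthetic and hypothetical.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 5000

# One protected attribute (group 0 or 1) and one genuine skill score.
group = rng.integers(0, 2, size=n)
skill = rng.normal(0, 1, size=n)

# Historical labels: past hiring favored group 0 regardless of skill,
# so the training data itself encodes the prejudice.
hired = (skill + 1.0 * (group == 0) + rng.normal(0, 0.5, n)) > 0.5

X = np.column_stack([group, skill])
model = LogisticRegression().fit(X, hired)

# At the same (average) skill level, the model now recommends
# candidates from group 0 far more often than those from group 1.
for g in (0, 1):
    candidates = np.column_stack([np.full(1000, g), np.zeros(1000)])
    rate = model.predict(candidates).mean()
    print(f"predicted hire rate for group {g} at average skill: {rate:.2f}")
```

The point of the sketch is that nothing in the training code is explicitly discriminatory; the disparity comes entirely from the biased historical labels the model learns from.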

AI ethicists are also worried that artificial intelligence could become a tool for governments to infringe on citizens’ civil liberties through mass surveillance. Experts have therefore raised concerns about possible authoritarian uses of AI systems by governments. The United States government has consistently condemned the Chinese government’s use of AI to persecute its people and censor dissent. Rights groups worry that similar abuses could take place in the United States if proper precautions are not in place.

The current state of AI regulations

Currently, AI innovation has outpaced AI regulation across the world. However, some states, such as California, have banned the use of AI-powered facial recognition by government departments and law enforcement agencies. The defense and national security communities have also come up with their own AI guidelines, while the United States government ratified a set of AI principles with 40 other countries. The European Commission’s High-Level Expert Group on Artificial Intelligence produced its own set of AI guidelines, which are more comprehensive and independent of the AI community.

Reaction from the AI developer community

The United States Chief Technology Officer, Michael Kratsios, indicated that the regulations were necessary to reduce uncertainty in the AI developer community. Terah Lyons, the Executive Director of the Partnership on AI, which receives support from major tech firms, said that the AI developer community should see the new AI guidelines as a positive step. However, the former Obama administration technology officer added that it was still difficult to assess the impact of the new guidelines. Similarly, New York University’s AI Now Institute welcomed the new guidelines but indicated that it was still assessing the effectiveness of the new principles.

Conclusion

While the AI guidelines act as a new basis for future AI regulation, they do not offer a definitive answer to how the technology should be governed. This leaves a possible avenue for misuse of the technology by rogue actors. Because of the complexity of the field, it is difficult to strike a balance between proper regulation and support for innovation. Consequently, AI technology will continue to operate in a gray area until governments figure out how to regulate it effectively.