The White House’s new “Blueprint for an AI Bill of Rights” is not a legally binding measure, but it could be the first step toward wide-scale federal regulation of AI.
A similar route was followed in the EU in recent years, with the European Commission creating a voluntary system of AI governance principles roughly three years before drafting a bill that would make many of those guidelines legally enforceable. However, the EU also has a smoother road in this regard, given the prior adoption of the General Data Protection Regulation (GDPR), which had already set related cybersecurity, governance and legal remedy terms in place across member states.
White House AI bill of rights calls for ban on discrimination, data security by design
The AI bill of rights proposal opens by noting that the use of AI in automated decision-making is already causing a variety of problems in areas such as credit decisions, hiring and even patient care. It also calls out “unchecked” social media data collection, with AI-driven platforms sometimes scraping all available public resources (Clearview AI being the most notorious example).
To this end, the AI bill of rights stipulates five guiding principles meant to govern design and deployment: system safety, protection from discrimination, data privacy, notice and explanation, and human alternatives. The White House has also released a downloadable handbook, “From Principles to Practice,” meant to further inform organizations interested in getting ahead of the curve on implementation.
The system safety portion of the AI bill of rights calls for consultation with a variety of stakeholders and domain experts, pre-deployment testing, ongoing monitoring and established mitigation measures. The language of this section specifically names “unintended but foreseeable” results of AI system use as something that organizations should be obligated to protect users and data subjects from. It names independent evaluation as a measure to ensure safety on an ongoing basis, along with public reporting of safety testing results and of the security measures that are deployed.
Algorithmic discrimination is also addressed by the AI bill of rights. This is an issue that has surfaced a number of times (and in a number of fields) since 2016, when COMPAS, a piece of software used by several states, was found to be biased in its assessments of the recidivism risk of defendants awaiting trial. This section names the usual categories of especially sensitive personal information that tend to have special protection rules applied to them, such as ethnicity, religion and genetic information. The bill calls for these protections to be embedded in the design process, and it names proactive equity assessments and accessibility for people with disabilities as specific elements.
Data privacy is another area in which the AI bill of rights calls for security-by-design principles. Some of the default protections it calls for include data minimization principles that limit collected information to the specific context of the decision or function, clear interfaces that do not use “dark patterns” or similar methods to steer user decisions, and consent requests that are equally simple to understand and give users agency over their data. This segment also calls for data transparency and for the ability of data subjects to access, modify and delete their own data. It additionally calls for a ban on “continuous” surveillance monitoring of data subjects in education, workplaces and housing.
AI bill of rights requires transparency in automated decision making and right to opt out
The “notice and explanation” section builds on some of these data privacy and accessibility proposals. It calls for clear explanations of decisions these systems make that impact data subjects, public notices that an AI system is in use, and plain-language reports on the operations of AI systems.
The “Human Alternatives, Consideration, and Fallback” portion of the AI bill of rights calls for a right to opt out of systems “where appropriate” and a right of access to a human representative who can address problems that arise. It also calls for guaranteed “human alternatives” to some of these AI functions, and for purpose-built systems in certain areas and industries that could be particularly subject to abuse (such as criminal justice, health and employment).
Policy analysts generally see this as just the beginning of an eventual push toward AI regulation, but a solid start in terms of consumer and data subject rights. However, it remains an entirely voluntary proposal, and it could struggle to move toward adoption as a legal standard as long as broader data privacy issues remain unaddressed by federal legislation. One major area the AI bill of rights does not yet broach at all is the circumstances under which AI technology might be banned outright if it is deemed too potentially harmful to the public.