
EU Proposes Heavy Regulation of “High Risk” Artificial Intelligence as Activists Call for Facial Recognition Ban

EU officials are considering wide-ranging regulation that would include heavy restrictions on a range of “high risk” AI applications as well as facial recognition systems used by law enforcement. A leaked document indicates that a facial recognition ban is also being considered for cases of “indiscriminate” and “generalized” mass surveillance, but privacy watchdogs in the region would like to see things taken a step further and have the technology made entirely unavailable to the police.

Proposed “social credit,” facial recognition ban aimed at China model & routine law enforcement use

The 81-page “Artificial Intelligence Act” proposal focuses on “fast-evolving” AI models that might cause harm to EU rights and interests. Though the paper does not mention the Chinese government by name, its proposed ban on “social scoring” clearly evokes the “social credit” system that country has implemented to restrict use of public transportation and to track the movements of the minority Uighur population. The paper also proposes a facial recognition ban as applicable to mass surveillance: “indiscriminate surveillance of natural persons should be prohibited when applied in a generalized manner to all persons without differentiation.”

The proposed AI regulations go beyond this, however, addressing a wide range of potentially “high risk” systems. An AI application categorized as “high risk” would be subject to special inspections, including examination of the data sets used to train it. High-risk categories would include financial applications, college admissions, employment and critical infrastructure, among other examples. Some categories might face an outright ban if deemed to be an “unacceptable risk”; examples cited here include “manipulating behavior to circumvent free will,” “targeting vulnerable groups” and using “subliminal techniques.” The risk level of an application would be determined by specific criteria, including its intended purpose, the number of people potentially affected and how irreversible the potential harm might be. The majority of AI applications, those that use relatively simple rule-based systems (such as chatbots and video games), would be considered low enough risk not to be subject to these regulations.

The facial recognition ban would prevent law enforcement from using real-time facial recognition in public spaces for routine duties, but would not take it off the table entirely. It establishes categories of “serious” exceptions such as terrorism investigations, finding missing children and public safety emergencies.

Some EU privacy watchdogs responded to the news of the facial recognition ban by declaring that it did not go far enough. The European Data Protection Supervisor (EDPS), the independent body that monitors organizational processing of personal data, called biometric identification “non-democratic” and characterized it as a serious intrusion into people’s lives. Other privacy advocates, such as Brussels-based European Digital Rights (EDRi), also support a total facial recognition ban and feel that the exceptions leave too many loopholes allowing authorities to frame investigations as “serious crimes” and flout the spirit of the proposed new law. Research conducted by EDRi has found that police in Germany used mass facial recognition indiscriminately at G20 protests, and a number of other countries have used it in public spaces for purposes such as detecting “loitering” and scanning crowds at sports venues.

High risk AI faces stronger standards, stiff penalties

The high-risk AI categories would be subject to new standards and regular supervision by officials, and companies could be hit with fines steeper than those presently allowed under the General Data Protection Regulation (GDPR): a maximum of 6% of annual global turnover.

The EU would be the first region of the world to adopt strong AI regulation of this nature, but the effects could be global. Tech giants based in other countries, most notably the Silicon Valley companies, might have to make changes to their operations and software to comply with EU law, and it may not be feasible or cost-effective to implement these changes solely in the EU.

The AI legislation would apply to both developers and users of these systems. Some specific requirements for developers enumerated in the report include a mandatory conformity assessment before AI systems can go into public use, standards for the quality of data sets and the tracing and reporting of results, and oversight by national market surveillance authorities. A new international board of regulators, the European Artificial Intelligence Board, would be established to ensure that implementation and enforcement are harmonized across the whole of the EU.

Though some in big tech are already expressing concern, the facial recognition ban and new AI regulations would need to be approved by both the European Parliament and the EU member states. That is a potentially contentious process that will almost certainly involve changes and could ultimately take years to resolve. Forty MEPs have already called for the proposed legislation to be strengthened, with both a stronger facial recognition ban and a ban on the use of personal characteristics (such as sex and gender) in AI decisions.