
EU Privacy Groups File Complaint, Assert Clearview AI Facial Recognition Software Violates Data Protection Laws

Already facing substantial legal difficulties in the United States and Canada, Clearview AI’s controversial facial recognition software now faces a new challenge in the European Union. A coalition of privacy groups headed by Privacy International has filed a set of legal complaints accusing the troubled company of violating data protection laws by “scraping” websites for photos without user knowledge or permission.

The complaints have been filed in four EU nations as well as the United Kingdom; the data harvesting dates back to when the terms of the General Data Protection Regulation (GDPR) still applied in the latter country, and it is also likely to fall afoul of the UK’s current, similar laws. Clearview AI operated in virtual secrecy for years before a New York Times exposé in early 2020 drew attention to the massive scope of its data scraping, which includes some 3 billion public images harvested from sites such as Facebook and Instagram.

Facial recognition giant in hot water internationally

The complaints against Clearview AI were filed by Privacy International, the Hermes Center for Transparency and Digital Human Rights, Homo Digitalis and noyb. Privacy International has been involved in a number of high-profile data privacy cases, including rulings on mass surveillance by national governments, while noyb has been in the news in recent months after a decision in its long-running case against Facebook, which it accused of blocking service to users who refuse to accept its data collection terms, put an end to EU-US data sharing. In addition to the UK, the group filed complaints in France, Austria, Italy, and Greece. Regulators in each country will have three months to respond to the complaints.

The case centers on the image scraping that Clearview AI does, a practice that has been identified in other parts of the world (and for which Clearview AI has already faced legal consequences). For some years, Clearview AI has used the APIs of various social media services to automatically identify available images that contain human faces, download them, and add them to a database containing billions of these biometric identifiers. Clearview AI primarily markets its facial recognition software to law enforcement agencies throughout the world, but it has also run into trouble for providing trials of the service to retail chains and other types of private businesses.
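To make the mechanics of this kind of harvesting concrete, the following is a minimal, purely hypothetical Python sketch of a generic download-and-detect step, using the requests library, OpenCV’s stock Haar cascade face detector, and a placeholder URL. It is an illustration of the general technique only and does not reflect Clearview AI’s actual software, which has not been made public.

# Purely illustrative sketch (not Clearview AI's code): download one publicly
# accessible image and detect faces in it with OpenCV's bundled Haar cascade.
import cv2
import numpy as np
import requests

def fetch_image(url: str) -> np.ndarray:
    # Download the image over HTTP and decode it into a BGR pixel array.
    response = requests.get(url, timeout=10)
    response.raise_for_status()
    buffer = np.frombuffer(response.content, dtype=np.uint8)
    return cv2.imdecode(buffer, cv2.IMREAD_COLOR)

def detect_faces(image: np.ndarray):
    # Run OpenCV's frontal-face Haar cascade and return (x, y, w, h) boxes.
    cascade = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml"
    )
    gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
    return cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)

if __name__ == "__main__":
    # Placeholder URL for illustration; an automated crawler would walk API
    # results or page listings rather than a single hard-coded address.
    url = "https://example.com/public-photo.jpg"
    image = fetch_image(url)
    for (x, y, w, h) in detect_faces(image):
        # A scraping pipeline would persist each detected face plus its source
        # metadata to a searchable database at this point.
        print(f"face found at x={x}, y={y}, width={w}, height={h} in {url}")

At the scale described in the complaints, the controversial step is not the face detection itself but the bulk collection and retention of such images and the biometric templates derived from them without the data subjects’ consent.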

The complaints assert that Clearview AI’s facial recognition scraping violates the data protection laws laid out by the GDPR, which stipulate that biometric identification information (such as pictures of unique facial features) cannot be collected without the consent of the data subject. There are also rules governing the sharing of such information with law enforcement agencies. Clearview AI CEO Hoan Ton-That says that the company does not currently operate in the EU and has no customers in the region, but the complaints say that EU residents were swept up in the company’s harvesting of billions of images of people without their knowledge from various social media platforms. The complaints are based on the testimony of EU residents who used a tool Clearview AI provides that allows anyone to see whether their images are in its facial recognition database and to request to opt out of it.

Clearview AI relied on stealth operations to avoid conflict with data protection laws

Founded in 2017, Clearview AI racked up hundreds of clients with little to no transparency into how its facial recognition product functioned; the company might have continued to operate in the shadows had it not been for the widely read NYT piece that came out in early 2020. Once the company’s business practices became common knowledge, both public and private pushback soon followed. A number of major social media sites said that the image scraping violated their terms of service and would be forbidden going forward. Clearview AI was also sued in Illinois last year for violating the state’s biometric privacy act, and was sued in California in March for violating that state’s new data protection laws. Canadian privacy authorities have also declared the company’s data gathering practices illegal in the country, and Clearview AI has voluntarily withdrawn its business there. It is also facing a probe in Australia, though that country does not have modern federal-level data protection laws comparable to the terms of the GDPR.

Since Clearview AI does not presently have a headquarters or known customers in the EU, the cases are likely to go before the European Data Protection Board. In a statement on facial recognition technologies last year, the board indicated that Clearview AI’s service would likely be found in violation of personal data protection laws. Other EU member states’ data protection authorities could also join the case if it is determined that Clearview AI may have violated these data protection laws.


Clearview AI’s primary remaining market is the United States, where a lack of federal data protection laws allows it to be taken up by hundreds of government and law enforcement agencies in spite of its legal issues in certain states. As of 2020, the company was believed to have about 2,200 clients internationally, including the FBI, Immigration and Customs Enforcement (ICE) and the Department of Justice. The company has stated that use of its facial recognition app increased by 26% after the January 6 riot at the Capitol in Washington DC.
