
Unwelcome in Another Country, Clearview AI Forced to Delete Facial Recognition Data in Australia

Australia’s privacy commissioner has ruled that Clearview AI violated the privacy of the country’s residents, meaning the company will be forced to delete its cache of their facial recognition data.

While not a direct order to leave Australia, the move will likely make Clearview AI’s business non-viable there. It may also push the company out of the United Kingdom in the near future; the investigation was conducted jointly with the UK Information Commissioner’s Office (ICO), which said it is considering regulatory action.

Clearview AI must give up Australian biometric information

The Office of the Australian Information Commissioner (OAIC), headed by Australian Information Commissioner and Privacy Commissioner Angelene Falk, determined that Clearview AI breached the privacy of Australians by collecting their facial biometric information without permission and by failing to take “reasonable steps” to comply with the applicable national privacy principles, verify the accuracy of the collected information, or notify affected parties about the disclosure of potentially sensitive personal information.

Specifically, Clearview AI was found to have breached the Australian Privacy Act 1988 in its “unfair” collection of the facial recognition data and its failure to adhere to the procedures laid out in the Australian Privacy Principles. Clearview AI’s methods of collection have already gotten it into trouble in other countries for similar reasons; the company’s database of three billion faces was primarily built by scraping public pictures on social media platforms without the knowledge or consent of anyone involved.

Clearview AI will now be required to destroy any facial recognition data collected in Australia and to cease collecting related information in the country. Commissioner Falk said that in addition to the technical violations of the country’s law and regulations, the company’s manner of scraping personal information posed a significant risk to vulnerable groups such as children and victims of crime.

Falk echoed a position articulated not only by the various national and regional governments that have moved to regulate Clearview AI, but also by the social media platforms it scraped (which have largely banned the upstart company as a result): people who upload pictures of themselves to a social media service do not reasonably expect to be added to an unaccountable third-party database of this nature.

The Australian Federal Police had reportedly been considering onboarding Clearview AI, trying the system out between October 2019 and March 2020. A separate investigation is looking into exactly what the agency did with the technology during that time and if any violations of privacy regulations occurred.

Use of UK facial recognition data also in question

The UK ICO has not yet taken any action against Clearview AI’s collection of facial recognition data, but Commissioner Elizabeth Denham said that the joint investigation was intended to benefit citizens of both the UK and Australia.

Clearview AI issued a statement in response saying that it does not do business in Australia and that its facial recognition data collection has been misinterpreted by the investigating agencies. While it may be true that Clearview AI does not presently have any clients in the country, it has been established that the company demonstrated its system to the Australian Federal Police in late 2019 and early 2020. A lawyer for Clearview AI said that the company intends to appeal the decision.

The company’s position has always been that because the images it collects were made available to the public, it did nothing wrong by harvesting them in bulk. Tech regulation in many countries has been slow to catch up with this sort of practice, but Clearview AI found itself in trouble in the state of Illinois for violating laws governing the collection of biometric identifying information. The social media platforms from which Clearview has scraped its inventory take a similarly dim view: Google, YouTube, Twitter and Facebook have all issued cease-and-desist letters forbidding the company from using their APIs for mass picture scraping.

After a privacy investigation in Canada determined that the company needed to obtain subjects’ consent to collect facial recognition data, Clearview AI opted to withdraw its product from the country. The company was also banned by Apple in 2020 after reporters from Gizmodo found its software, which is supposed to be made available only to legitimate law enforcement agencies, sitting in an unsecured Amazon S3 bucket with instructions attached for sideloading it onto Apple devices.


Despite the strong ruling against the company, Falk said that the facial recognition data issues raised here should prompt a review of Australia’s 30-year-old Privacy Act, with prohibitions on data scraping and requirements for social media platforms to protect users from it as central considerations.

 
