New Facial Recognition Tool for Law Enforcement Raises Alarm
By Byron Muhlberg

Controversy is mounting over the facial recognition tools that law enforcement agencies are increasingly using, prompting concerns over data privacy and what critics perceive as ‘Big Brother’ authoritarianism.

At the center of all the action is a little-known Manhattan-based tech firm whose operations are shrouded in secrecy. Founded in 2017 by software developer Hoan Ton-That, the firm is called Clearview AI, and its business is the research and development of facial recognition technologies for use by law enforcement agencies around the globe.

According to a recent New York Times investigation, from which the bulk of the new information on the company emerged, Clearview AI develops products for law enforcement agencies whose capabilities are overstated and which infringe on the privacy of individuals.

The controversy arises from the fact that Clearview AI scours a database of more than 3 billion photographs of people from around the world. Kashmir Hill of The New York Times reports that these photographs have allegedly been scraped surreptitiously from the Internet, typically from social media profiles.

New frontiers for facial recognition tool

Despite its relatively small size, Clearview AI wields outsized influence over the privacy of individuals through its products. The company dominates the market for police facial recognition tools, for instance, with more than 600 law enforcement agencies reportedly having started using Clearview AI in 2019.

In essence, Clearview is carving a path in an industry where few others have dared to tread. Prior to the company’s explosive growth over the past 24 months, state agencies and Silicon Valley giants alike had steered well clear of throwing their weight behind a facial recognition tool aimed at the law enforcement market.

In 2011, for example, former Google CEO Eric Schmidt defended his company’s decision to cease development of a facial recognition tool it had been working on. He told a conference in California that “As far as I know, [facial recognition is] the only technology Google has built and, after looking at it, we decided to stop.” Schmidt went on to express his concerns over the “union of mobile tracking and face recognition,” affirming his belief that the technology might end up being used “in a very bad way” in spite of its indisputable benefits.

Nine years later, the use of facial recognition by law enforcement has nevertheless exploded, presenting an immense challenge to the privacy of ordinary people.

While it is true that law enforcement agencies have been using facial recognition for going on two decades now, it is only in recent years that its use has become a significant privacy concern.

In the past, for instance, police departments had been restricted to searching images in government databases, limited mostly to driver’s license photos and mugshots. Within the past five years, however, the typical facial recognition tool has become both cheaper to build and more accurate, with big companies such as Amazon bringing new products to market that make use of facial recognition technologies.

Big decisions for law enforcement

In spite of Clearview AI’s unprecedented growth, the benefits law enforcement gains from its facial recognition tool are not all they are cracked up to be, with the company falling short on many of its own claims.

Since the rapid proliferation of its app, for instance, Clearview AI has faced criticism for overstating its software’s effectiveness. Many such claims have not undergone independent verification, and the company declined to submit its technology to the National Institute of Standards and Technology (NIST) review of facial recognition software in December 2019.

Following the stir caused by The New York Times investigation, the US state of New Jersey has begun taking steps to curb the use of Clearview AI’s facial recognition app. According to the American Civil Liberties Union (ACLU), the Attorney General of New Jersey has “put a moratorium on Clearview AI’s chilling, unregulated facial recognition software,” describing the move against the company’s facial recognition tool as “unequivocally good news.”

The manner in which police departments embrace new technologies in the coming years, especially given the mounting skepticism over the use of Clearview’s facial recognition tool, will likely play a defining role in shaping the legal debate between security and privacy. For the time being, while scraping the internet for images of ordinary people does indeed tread an ambiguous line between the lawful and the unlawful, such behavior is undoubtedly contributing to a growing sense of mistrust around the use of facial recognition technology in public life.