Increasingly widespread adoption of facial recognition technology for law enforcement purposes has sparked a heated global debate over the past year or two. Clearview AI has been one of the central points of contention, becoming something of a poster child for the potential abuses and lack of transparency in such programs. The embattled facial recognition startup's road is not getting any easier: an exposed server has been found containing the source code for the company's facial recognition software, along with secret keys and credentials that would grant a disturbing level of access to its internal network.
Clearview AI: No longer just for law enforcement
Clearview AI attempts to sell the public on its product by promising that only vetted law enforcement agencies are given access to it. A breach just two months ago revealed that not to be the case. The company's client list was exposed, revealing that it has also been doing business with retail chains such as Best Buy and Macy's. Retailers have an interest in facial recognition technology for everything from collecting marketing data to tracking potential shoplifters; customers would likely be uncomfortable with nearly all of these uses, but they are also by and large unaware that some stores have been doing this for at least a couple of years now.
That software may now be in unknown hands: it was available to anyone who happened upon the exposed server during the breach window. The breach was discovered by Dubai-based cybersecurity firm SpiderSilk. As is common in these incidents, the culprit was a misconfigured cloud-based database: the server was set up so that anyone who registered as a new user could access it.
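To make that failure mode concrete, the sketch below is a hypothetical, deliberately minimal example of the class of bug described: the server authenticates requests but never authorizes them, so simply registering an account is enough to read internal artifacts. None of the names or endpoints here come from Clearview's actual systems.

```python
# Hypothetical sketch of the misconfiguration class described above.
# Nothing here is Clearview's actual code; names are illustrative.
from flask import Flask, abort, request

app = Flask(__name__)

# In-memory "user database"; anyone can self-register below.
USERS = {}

@app.route("/register", methods=["POST"])
def register():
    token = request.json["token"]
    USERS[token] = {"role": "unverified"}  # no vetting of new accounts
    return {"registered": True}

@app.route("/artifacts/<path:name>")
def artifacts(name):
    token = request.headers.get("Authorization", "")
    if token not in USERS:
        abort(401)  # authentication is enforced...
    # ...but authorization never is. A check such as
    #   if USERS[token]["role"] != "engineer": abort(403)
    # is missing, so any self-registered user can fetch internal files.
    return {"artifact": name, "contents": "..."}
```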
In addition to the facial recognition source code, the security researchers found credentials and keys that provided access to other cloud storage buckets maintained by the company. These buckets contained complete copies of the retail apps that Clearview AI provides to its customers along with earlier versions meant for developer testing.
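Finding such material in exposed source is usually trivial, because hardcoded secrets follow well-known formats. As a hedged illustration only (the patterns and the "exposed_repo" path are assumptions for this sketch, not details from the breach report), a basic scan might look like this:

```python
# Illustrative secrets scan over a local copy of exposed source code.
# The directory name and patterns are assumptions, not breach specifics.
import pathlib
import re

PATTERNS = {
    "aws_access_key_id": re.compile(r"AKIA[0-9A-Z]{16}"),
    "slack_token": re.compile(r"xox[baprs]-[0-9A-Za-z-]{10,}"),
}

for path in pathlib.Path("exposed_repo").rglob("*"):  # hypothetical checkout
    if not path.is_file():
        continue
    try:
        text = path.read_text(errors="ignore")
    except OSError:
        continue
    for name, pattern in PATTERNS.items():
        for match in pattern.findall(text):
            # Print only a prefix to avoid echoing a full credential.
            print(f"{path}: possible {name}: {match[:8]}...")
```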
As if that weren't enough, the database also contained Slack tokens that would allow anyone to access the company's internal communications without a password. It also held 70,000 security camera videos from an apparent facial recognition trial program run in the lobby of a New York residential building, showing residents entering and leaving the premises.
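The Slack tokens are especially dangerous because Slack's Web API authenticates with the bearer token alone. As a sketch (the token below is a placeholder, not a real credential), verifying whether a leaked token is still live takes a single request to Slack's auth.test endpoint:

```python
# Why a leaked Slack token matters: the token alone grants API access,
# with no password or second factor involved. Placeholder token only.
import requests

LEAKED_TOKEN = "xoxb-EXAMPLE-NOT-REAL"

resp = requests.post(
    "https://slack.com/api/auth.test",
    headers={"Authorization": f"Bearer {LEAKED_TOKEN}"},
)
data = resp.json()
if data.get("ok"):
    # A valid token reveals the workspace and user it belongs to,
    # and carries whatever scopes were attached when it was issued.
    print(f"Token is live for workspace {data['team']} as {data['user']}")
else:
    print(f"Token rejected: {data.get('error')}")
```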
The company’s poor response
Clearview AI CEO and founder Hoan Ton-That has issued a statement claiming that no "personally identifiable information, search history, or biometric identifiers" were exposed in the breach. The first item on that list is questionable, however, given the presence of the Slack tokens and the surveillance videos of identifiable residents.
The breach was eventually closed after SpiderSilk researcher Mossab Hussein reported it to Clearview AI, but not without some drama. The company wanted Hussein to sign a non-disclosure agreement, offering a "bug bounty" in what amounted to an attempt to buy his silence. When Hussein refused, the company initially accused him of attempted extortion.
Clearview claims that a "full forensic audit" has been performed and that it found no other signs of unauthorized access. When a company makes this claim as part of its breach PR response, one is asked to believe that its audit was comprehensive and competent and that it is being honest about the results. Given the company's checkered past, including concealing its retail customers and discrepancies in the advertised accuracy of its product, there is little reason to grant it the benefit of the doubt.
Threat actors, from cyber criminals to nation-state hacking groups, continually scan for these same sorts of vulnerabilities using the same tools and methods that security researchers employ. It is hardly beyond reason to think that the very active APT groups of a country like China or Russia, at the very least, came across this unsecured database during the vulnerability window.
The case against facial recognition systems
From a purely technical perspective, Clearview AI has become popular with clients for its relative simplicity and its massive database. The company boasts a collection of over three billion images, many of them scraped from public sources including social media accounts. The app makes it possible to snap a photo with a mobile phone and immediately run it against that database for potential matches. The NYPD tested Clearview's app in 2019 but ultimately opted to pass on it; there has been controversy in New York City in recent months as some police officers appear to have taken it upon themselves to keep using the app in this way on their own.
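Clearview's pipeline is proprietary, but the snap-a-photo-and-search flow described above is, in general shape, a standard embedding lookup. The sketch below uses the open-source face_recognition library and placeholder image files to show that general flow; it is not Clearview's implementation.

```python
# Toy sketch of a photo-to-database matching flow using the open-source
# face_recognition library. File names are placeholders.
import face_recognition
import numpy as np

# Hypothetical "database": embeddings precomputed from collected images.
db_images = ["person_a.jpg", "person_b.jpg"]
db_encodings = [
    face_recognition.face_encodings(face_recognition.load_image_file(p))[0]
    for p in db_images
]

# The probe: a photo snapped on a phone. Assumes exactly one face is found.
probe = face_recognition.load_image_file("snapshot.jpg")
probe_encoding = face_recognition.face_encodings(probe)[0]

# Nearest-neighbor search over embedding distances; lower is more similar.
distances = face_recognition.face_distance(db_encodings, probe_encoding)
best = int(np.argmin(distances))
print(f"Closest match: {db_images[best]} (distance {distances[best]:.3f})")
```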
The facial recognition app was already under heavy fire prior to this data breach. Though Clearview AI began operating in 2017, the notoriously secretive company only came to widespread public attention in 2019, when it was found to be scraping major social media sites for names and photos in violation of their terms of service. Because images of children were swept up in this dragnet, the collection and processing may amount to numerous violations of COPPA. The attorney general of Vermont has filed a lawsuit to prevent the company from collecting photos of the state's residents, and a number of state and city police departments (New Jersey and San Diego among them) have ordered officers not to use it.