
Some Observations on the Clearview AI Facial Recognition System – From Someone Who Has Actually Used It …

Clearview is a facial recognition search engine licensed to law enforcement agencies by Clearview AI, Inc. It permits an investigating officer to upload a photo of an individual of interest (a possible suspect, witness or victim) and search a database, compiled by Clearview, of over 3 billion publicly available images posted by individuals and organizations on the web. According to the company, Clearview employs state-of-the-art facial recognition technology to compare the uploaded image against that database; if a likely match is found, the program displays the publicly available image it located along with its associated public link. Clearview refers to its system as being like a ‘Google search for faces.’

A bit of background about Clearview

“Facial Recognition” is a term used in broad-brush fashion that frequently conflates different technologies. Facial digital identity verification systems, such as those implemented by credit reporting agencies, are systems where a person uploads a photo of him/herself and the system compares that photo with the photo on a government ID to verify the person’s identity. Clearview is not a facial digital identity verification system.

A live-streamed facial recognition system is one coupled to live video surveillance, a body camera, a live drone feed or a similar source to identify individuals ‘on the fly.’ Clearview is not a live-streamed facial recognition system.

A third type of system is a non-streamed facial recognition search engine that compares an uploaded target image of a person to an indexed database to try to find a possible match. This is where Clearview sits in the universe of ‘facial recognition’ systems – it is a non-streamed facial recognition search engine where historic photos are uploaded and then searched against a database of historic web-based photos.
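
To make the distinction concrete for non-technical readers, the sketch below illustrates the general pattern behind engines of this third type: every indexed photo is reduced to a numeric ‘embedding,’ and a search simply ranks the indexed photos by their similarity to the uploaded image. This is a generic illustration based on my own assumptions about how such systems typically work, not a description of Clearview’s proprietary implementation; the function names, the similarity threshold and the example data are all made up.

```python
# Generic illustration of a non-streamed face search engine -- NOT Clearview's
# actual implementation. Each indexed photo has already been converted to a
# fixed-length numeric vector (an "embedding"); a search ranks those vectors
# by similarity to the embedding of the uploaded ("probe") photo.
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """1.0 means the two embeddings point in the same direction; ~0 means unrelated."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def search(probe_embedding, index, threshold=0.6, top_k=10):
    """Rank (embedding, public_url) pairs from a pre-built index of public web
    photos by similarity to the probe and return only plausible candidates."""
    scored = sorted(
        ((cosine_similarity(probe_embedding, emb), url) for emb, url in index),
        key=lambda pair: pair[0],
        reverse=True,
    )
    return [(score, url) for score, url in scored[:top_k] if score >= threshold]

# Made-up 4-dimensional embeddings and placeholder URLs, purely for illustration:
index = [
    (np.array([0.9, 0.1, 0.0, 0.4]), "https://example.com/public-profile-photo"),
    (np.array([0.1, 0.8, 0.3, 0.0]), "https://example.com/unrelated-photo"),
]
print(search(np.array([0.85, 0.15, 0.05, 0.35]), index))
```

Real engines differ in how the embedding is computed and how the index is searched at scale, but the core idea that a result is a ranked list of publicly indexed photos with their links carries over.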

Over the past few months there has been criticism of the Clearview system alleging that it infringes the privacy of individuals, that it violates the terms of service (TOS) of the social media platforms from which the images were allegedly scraped to create the Clearview database, that it violates copyright law, and/or that it violates privacy laws in a number of U.S. states and internationally. These critics have posted their objections on Twitter, on LinkedIn, and in various printed and online media. The allegations have also included assertions of inherent racial or gender bias in search results, claims that a ‘false match’ may result in a false arrest or investigation of an innocent person, allegations of racist connections among some of its founders, and arguments that U.S. law should be changed to require a search warrant before the police may run a photo through Clearview or any facial recognition system.

In addition, the company has received cease-and-desist letters from a number of social media platforms alleging that the ‘scraping’ of the publicly available images that users voluntarily post on their sites (think LinkedIn, Facebook or Instagram) violates copyright law and the terms of service of those sites. Lawsuits have also been filed by a number of states, along with at least one class action, alleging violation of state privacy laws. Clearview has denied any wrongdoing, asserts that the system and its intended use fully comport with U.S. law, and has published a legal opinion in the form of a white paper that is available on the web.

To compound the legal challenges and privacy advocates’ criticism facing Clearview, in February 2020 the company suffered a data breach in which its client list (i.e. law enforcement agencies) was stolen, although the company said that only the agency names, total number of searches and total number of accounts were taken. It denied that the target photos uploaded or other details of searches run by law enforcement were compromised, and said that neither the Clearview system nor its processes were breached. Clearview said the attacker gained access through stolen credentials.

One point of commonality in the various negative commentary is that none of the authors (that I am aware of) have actually used or tested the Clearview system. As a consequence, virtually all of the Twitter comments, social media posts and written commentary mischaracterize how Clearview actually functions in the hands of a law enforcement officer as a tool in a criminal investigation. These inaccurate articles and comments then feed on themselves, propagating further inaccuracies and mischaracterizations, and law enforcement agencies have sadly been unwilling to set the record straight so that the inaccurate assertions flying around cyberspace and in the media can be corrected. Clearview’s own comments are, naturally, discounted in the press as inherently biased and self-serving.

As most readers of this article are not police officers with hands-on experience using Clearview or other digital tools for criminal investigations, I thought it would be useful to share my personal experience testing Clearview.

A few quick disclaimers and notes

I was given the opportunity to test Clearview for a few months, but I have no legal or financial interest in Clearview AI, Inc., nor do I personally know any of the directors, officers or shareholders. I was offered access to Clearview in my role as a sworn police officer in the United States (part-time, with full-time authority) with experience in criminal investigations using a variety of restricted, law-enforcement-only databases and digital investigative tools (including Palantir).

Although I am also a full-time technology lawyer, I am intentionally not offering any views in this article related to the legal challenges to the Clearview system. Those will move through the U.S. court system and it will be interesting to see how the courts decide on those issues. I will also save my professional views relating to the use of facial recognition technology by law enforcement and whether there is any right of anonymity in public places (cyber or in the 3D world) under the law for another forum and perhaps a future article.

My sole objective in this article is to share my user experience and my views of the utility of Clearview from the perspective of a criminal investigator. The views and statements are my own, made in my personal capacity, and do not necessarily reflect the views of Clearview AI Inc., any law enforcement agency, or any other person.

It is also important to understand that Clearview is only one of a number of facial recognition programs available to law enforcement, and in fact, there are similar systems available for public use. For example, check out TinEye (tineye.com), a free system that allows anyone to upload a person’s photo and search the web for matches. According to TinEye, its system crawls the web and adds images to its index, which contains over 40.6 billion images. Other systems include Google Face Search, Betaface, FindFace and a variety of others.

The Clearview user interface and its operation

The Clearview interface is very simple in structure and operation. Once a police officer logs into Clearview, an initial ‘splash screen’ is displayed. This screen describes how Clearview works, advising the officer to upload the best photo s/he has (face forward, both eyes visible, no glasses), that the search may be saved (or not) as the officer chooses, that all users are reminded to follow the law and only use Clearview for authorized purposes, and that matches cannot be used as evidence in court.

The admonition that matches cannot be used as evidence in court is important to understand, particularly for readers who are not lawyers or police officers. Clearview is simply a tool for use in an investigation. It provides potential leads for follow-up. The system does not make decisions for officers; it only gives possible matches, and it is entirely up to the officer to decide if a ‘match’ is actually a real potential match and/or whether any investigative follow-up is warranted. No one gets arrested simply because of a ‘match’ from Clearview or any other facial recognition program used in an investigation. No one gets falsely identified by Clearview, because search results are only indicative and specifically not intended to positively ID anyone. Because a ‘match’ is only a potential starting point in the process of identifying an individual in an investigation, it does not constitute probable cause for an arrest or for the issuance of a warrant.

After the initial screen, the officer can select ‘New Search,’ upload a photo of a subject and click search. The photo could be of a potential suspect, a possible witness or a victim the officer is trying to identify. The photo may come from one of many sources, including a witness’s or victim’s smartphone, a Ring camera or other IoT video screenshot, a physical photograph, or an ATM or security camera screen grab. The better the photo, the more likely the system is to find a match in its database.

Clearview will then search its database and report ‘matches,’ and, if the officer chooses, it will also show ‘similar’ faces. What does Clearview actually display when it finds what it determines is a likely ‘match’? The system displays the photo of the person it found in the Clearview database from the public web, along with a link to the photo and the website where it is located. All of this data is publicly available.

For example, when I searched my face using Clearview it found, amongst other images of me, my LinkedIn profile photo, displayed with the link to my LinkedIn profile. Let’s be clear about what Clearview displayed as a search result: it displayed a photo of myself that I voluntarily provided to LinkedIn for public display when I created my account, and the link to my LinkedIn profile. Despite assertions to the contrary by some critics, Clearview does not display non-public personal private information to the police officer; it only displays photos and data that have been publicly posted on the web. So in my case, Clearview displayed photos of me on LinkedIn as well as at conferences where I spoke, with links to those pages, including, for example, multiple years of speaking at the RSA Cyber Security conferences (posted by RSA). If an officer clicks on the link associated with a result, it takes the officer to the web page where the image came from – that’s it. From there it’s up to the investigator to decide whether, and how best, to follow up, if at all.
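
Put another way, each result in my experience boils down to two things: a publicly posted image and the public page it came from. Purely as an illustration of how little data that is (the type and field names below are my own invention, not Clearview’s schema), a result record needs to hold no more than this:

```python
# Illustrative only -- the field names are my own assumption, not Clearview's schema.
from dataclasses import dataclass

@dataclass
class CandidateMatch:
    image_url: str   # the publicly posted photo the engine matched
    source_url: str  # the public web page where that photo appears

# Example mirroring the LinkedIn result described above (URLs are placeholders):
example = CandidateMatch(
    image_url="https://example.com/public-profile-photo.jpg",
    source_url="https://example.com/public-profile-page",
)
```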

If, as an investigator, I decide that a Clearview result looks like a real match, I would click the link to see what public information was posted by or about that person. If further follow-up is warranted, I would use other investigative methods and systems to further my investigation. Critics who have never actually used the Clearview system like to raise issues of ‘error rates,’ ‘false positives’ and even ‘racial bias,’ but in actual practice these are non-issues. Why? Because Clearview only provides the officer with possible indicative matches based on an input image provided by the officer (or it may come up with zero possible matches). The human brain then does a wonderful job of visually assessing the possible match and deciding whether it really is a match and whether any follow-up is warranted.

I decided to get a little more imaginative and test the system using various photos, and here is how the Clearview system responded:

  • I searched a photo of my law enforcement Academy graduation with me in uniform from decades ago (and yes I have aged a bit over the last 30 years) – Clearview not only found me, it found current images of me on the web (e.g. LinkedIn and RSA). I found that impressive. That means I can search a photo of a potential victim or witness or suspect from years ago and perhaps find a current match on the web.
  • I searched a photo of me holding an ID card of another person with that person’s photo on the ID card. Clearview realized that there were two faces in the photo I uploaded, mine and the one on the ID card, and asked me which face I would like to search. I selected the face on the ID card, and it correctly located images on the web that matched the photo on the ID card, along with public links to those sites. This is very useful and powerful – think about an image of a person in a mirror or reflection (victim, suspect or witness) and the ability of the system to search the public web for that reflected image (depending, of course, on the quality of the input image).
  • I tried to search images of myself and others wearing glasses or hats, and at angles that only showed partial faces. Clearview did not do too well in this scenario, nor would any other facial recognition system. Most importantly, the system returned no matches in my test – it did not return images of people who were in fact not a match – but even if it had, the human officer would then make up his/her own mind as to whether or not to consider those results and follow up.
  • I searched a few people that I knew had no social media presence or likely any public web footprint. Clearview came up with zero matches as it should. Recall that Clearview’s database is only from the public web.
  • I searched faces of Asian, African American and Hispanic individuals. Clearview did well for all except some African American test faces. A caveat is required here: in my testing there were no controls in terms of photo quality, position, or lighting. My testing was very anecdotal, as the images varied greatly, but the system failed in my test to match African American faces more often than other races. However, a failure to match does not mean a mismatch or false positive; rather, the system simply provided no match, and the ‘similar faces,’ if displayed, were clearly nowhere near a match. A result such as this in a criminal investigation would be useless, and I would simply ignore the lack of a match from Clearview and look for other ways to identify a person of interest.
  • I searched faces of a few non-U.S. residents (of Asian ethnicity, living in Asia) with interesting positive results. In one search Clearview identified an Instagram photo of a third party in Asia, posted publicly some years ago, in which the target subject was one of four people pictured. This was a match and quite an impressive result. In a criminal investigation, when looking for victims, witnesses and/or suspects, it is very valuable to find a publicly available photo linking the person of interest to others and to those individuals’ public social media accounts.

To sum it all up

On the whole, I found the Clearview system to be accurate, certainly not deceptive or biased in terms of producing ‘false positives,’ and a useful investigative tool. Is it better than the other publicly available facial recognition systems on the market today? Yes, in my experience it appears to be, in terms of its search functionality. In my view it does not present any more danger to privacy than any one of the various facial and image recognition search programs, including Google Face Search and TinEye, that are freely available to the public to search the web for faces.

All law enforcement agencies in the U.S. have policies for using non-public, law-enforcement-only investigative tools and databases that restrict access, ensure authorized use and require proper case file documentation for criminal investigations. If a law enforcement agency licenses Clearview, these policies are, in my view, more than sufficient to cover a system that only accesses publicly available data. By comparison, anyone (whether for lawful or unlawful purposes) can run a person’s image through Google Face Search, TinEye or other publicly available facial or image recognition web search engines with absolutely no basis or reason.

Would Clearview be more powerful if it were coupled to other non-public databases such as DMV and jail and/or booking photos? Yes, of course it would, but that would also present other issues that are beyond the scope of this article – and, more importantly, that is not what Clearview AI is licensing to law enforcement agencies. To argue otherwise is simply to set up a straw man that does not represent reality.

The debate regarding the use of facial recognition by law enforcement as well as by private persons and commercial entities will continue (as it should) but any discussion, particularly about possible legislation or other legal limitations for the use of Clearview, should be based on facts and not assumptions or a hypothetical ‘parade of horribles’ independent of the reality of what the system actually does. I hope that this article helps others understand Clearview’s actual functionality and utility from a criminal investigator/user’s perspective so that the discussion can be kept focused, accurate and objective.