The state of Illinois has made news in recent months for its uniquely strong legislation governing facial recognition technology. The state's Biometric Information Privacy Act requires that prior notice be given and consent obtained before any collection of biometric information, and it is the only law of this type in the United States that allows individuals to file a civil action in response to a violation.
A newly proposed federal bill would apply a similar level of facial recognition technology regulation across the country. Introduced by Sens. Jeff Merkley and Bernie Sanders, the National Biometric Information Privacy Act would implement two of the central provisions of the Illinois law: the requirement that written consent be collected before any biometric data is recorded, and the ability of both private citizens and state attorneys general to sue companies that violate these terms.
Private sector abuse of facial recognition technology in the crosshairs
To date, most of the concern about facial recognition technology has been centered on the potential for law enforcement abuse. Even the criticism of private companies (and the civil cases brought under Illinois law) has mostly been directed at contractors that primarily service law enforcement agencies, such as Clearview AI.
While those remain serious and immediate concerns, the new bill expands in scope to encompass direct use of facial recognition technology by private companies in their retail locations and workplaces. It even addresses the reach that some companies have into residential neighborhoods; for example, potential integration with home security services like Amazon’s Ring and Google’s Nest. The bill also specifies that the required consent must be obtained separately and clearly from any other agreement; for example, it can’t be bundled into an employment contract or terms of service. “We can’t let companies scoop up or profit from people’s faces and fingerprints without their consent … We have to fight against a ‘big brother’ surveillance state that eradicates our privacy and our control of our own information, be it a threat from the government or from private companies,” Merkley said in a press release.
One of the recent events that encapsulates the motivation for this bill is the July Reuters report revealing that national retail chain Rite Aid had deployed facial recognition technology for anti-shoplifting purposes at some 200 of its stores. While leaks earlier this year indicated that Clearview AI was at least in discussions with a number of retail chains about similar deployments, the Rite Aid revelation was more concerning in that the company had contracted with a China-based technology provider (DeepCam LLC) funded in large part by a Chinese government grant. There has been considerable global concern about elements of China’s expansive state surveillance system being exported to other parts of the world via companies such as these.
The bill includes some additional measures. It would require companies to publish a public facial recognition technology usage statement, including the reasons for collecting the data and a retention and removal schedule. Companies could not profit from biometric data, use it for advertising purposes, or trade it or make it available to other organizations. Individuals would also be able to access their biometric data upon request.
Individuals bringing a lawsuit would be able to recover actual damages in cases where negligence can be demonstrated, or liquidated damages of at least $1,000. Among other organizations, the American Civil Liberties Union, the Electronic Frontier Foundation and the Open Technology Institute are supporting the new facial recognition technology regulation.
Bringing order to a lawless industry
There is presently no real federal regulation of facial recognition technology in the United States. While biometric information is regulated within certain contexts, most notably under the HIPAA regulations covering the healthcare industry, there is little to stop private businesses from collecting and using facial recognition markers on their property.
There is some regulation in individual states, with Illinois being the strongest and most notable example (and also the oldest of these measures, first adopted in 2008). Five other states have a privacy law that addresses biometric information in some way, including the California Consumer Privacy Act (CCPA), but none are nearly as strong in terms of collection regulations and potential consequences. The CCPA identifies facial features as a protected category of personal information, but is limited mostly to “right to know” and “right to opt out” measures with a reduced ability to recover damages.
A number of major cities have banned police use of facial recognition technology, and at least one (Portland, OR) has discussed banning it for private business use as well. Many more cities have limited its use by government agencies, banning it from police body camera systems and restricting its use to investigations of certain violent crimes.
This debate is part of a larger ongoing struggle to determine whether a federal privacy law is appropriate and, if so, what it should look like. There is considerable bipartisan support for a federal scheme, and big tech companies are also conditionally on board (provided they can influence the drafting of it). The Covid-19 pandemic appears to have increased overall support for such a measure, given the perceived ease it would lend to contact tracing, but it also appears to have stalled progress on the several potential bills that were circulating in Congress.