At one time, facial recognition technology held enormous promise as a way to improve everything from law enforcement to the way you log into your digital devices. But the backlash against the technology is now growing. In some cases, as in San Francisco, politicians are calling for an outright ban on its further use. And it looks increasingly likely that the U.S. Congress will introduce bipartisan legislation to limit how facial recognition technology can be used, and by whom.
Congressional panel on facial recognition technology
Recently, for example, the House Oversight and Reform Committee held a hearing on facial recognition technology to learn more about how the artificial intelligence powering it is currently being used, and what some of the potential abuses might be. As part of the hearing, which might eventually lead to a new federal law, lawmakers heard from a wide range of experts, including legal scholars, privacy advocates, algorithmic bias researchers, and law enforcement agencies.
What became increasingly clear over the course of the hearing was just how much bipartisan support has formed around regulating facial recognition systems to protect civil liberties. Outspoken Congressman Jim Jordan (Republican – Ohio), for example, noted that “no elected officials gave the OK” on the use of the technology. Moreover, he suggested that facial recognition technology, as currently deployed, might violate both the First and Fourth Amendments of the U.S. Constitution.
One dangerous scenario, from a constitutional perspective, is that law enforcement officers might abuse the technology as a result of the deep biases and inaccuracies at its core. One Georgetown Law study found that the New York Police Department (NYPD) abused its system by feeding in a photo of Hollywood actor Woody Harrelson in order to generate matches for a suspect who resembled the actor. Another concern is that so-called “face surveillance” (i.e. real-time dragnets) might become the new norm.
Problems with facial recognition technology
These fears about facial recognition technology are not entirely unfounded. A series of research studies has shown that the technology might not be nearly as accurate as some people think. For example, the MIT Media Lab recently examined Amazon’s Rekognition technology and found it woefully inaccurate: it identified women as men 20 percent of the time, and identified women of color (i.e. women with darker skin) as men 33 percent of the time. In a separate test by the ACLU, Rekognition even matched the faces of 28 members of Congress to criminal mug shots in a database.
Those findings support other studies showing that facial recognition technology is still largely inaccurate on the faces of women and minorities. The problem, quite simply, is that any AI-powered technology is only as accurate as the data set used to “train” it. If you show a computer thousands of faces of white men, can you realistically expect it to identify the faces of women and minorities just as well?
This is where things get really dangerous, because it can lead to what technology researchers call “false positives”: the system thinks it has found a match when it really hasn’t. At worst, this can lead to the arrest of the wrong person; at the very least, it can create some very awkward situations in which completely innocent people are accused of actions they never committed. In one study, for instance, researchers found that the FBI’s face database turned up false positives 15 percent of the time, and the figure was even higher for women and minorities.
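The mechanics behind these uneven error rates are easy to sketch. The toy Python example below (all scores and the threshold are hypothetical, not drawn from any real system) shows how a single fixed match threshold can produce a 0 percent false-positive rate for a group the model was trained on heavily, and a much higher rate for an underrepresented group whose non-matches the model scores overconfidently:

```python
# Illustrative sketch with made-up numbers: how one match threshold can
# yield very different false-positive rates across demographic groups.

def false_positive_rate(scores, is_true_match, threshold):
    """Fraction of non-matching face pairs wrongly flagged as a match."""
    non_matches = [s for s, m in zip(scores, is_true_match) if not m]
    if not non_matches:
        return 0.0
    flagged = sum(1 for s in non_matches if s >= threshold)
    return flagged / len(non_matches)

# Hypothetical similarity scores for face pairs that are NOT the same person.
# A model trained mostly on one group tends to score non-matches from an
# underrepresented group higher, so more of them clear the threshold.
group_a_scores = [0.42, 0.55, 0.61, 0.48, 0.30, 0.67, 0.52, 0.39]  # well represented
group_b_scores = [0.71, 0.83, 0.66, 0.78, 0.59, 0.88, 0.74, 0.69]  # underrepresented

THRESHOLD = 0.80          # one operating point applied to everyone
no_match = [False] * 8    # every pair here is two different people

print(false_positive_rate(group_a_scores, no_match, THRESHOLD))  # 0.0
print(false_positive_rate(group_b_scores, no_match, THRESHOLD))  # 0.25
```

In this sketch, the same 0.80 cutoff wrongly flags none of the well-represented group’s non-matches but one in four of the underrepresented group’s, which is exactly the kind of disparity the studies above describe.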
To see how this racial and gender bias plays out in real life, consider the case of an 18-year-old minority male who was arrested on charges that he was behind a series of Apple Store thefts. It turns out the police got the wrong person, and that individual is now bringing a $1 billion lawsuit against Apple. Or consider the airline JetBlue, which was checking passengers into flights using their faces without their consent. From apartment buildings with face-entry systems to ride-sharing companies like Uber, new cases involving facial recognition technology continue to pop up.
All of the big tech giants involved in facial recognition technology – Amazon, Apple, and Microsoft – have been caught up in one scandal or another. At times, these data privacy scandals have verged on national security scandals as well. Microsoft, for example, has worked with a military-run university in China on facial recognition research, making it complicit (at least to a limited degree) in the Chinese government’s use of the technology to track its citizens.
The case for regulating facial recognition technology
With all this as context, it’s perhaps no surprise that momentum for regulating facial recognition technology is building across the political spectrum. For lawmakers, it’s low-hanging fruit and a chance to flex their muscles on data privacy: when constituents badger them about reining in Silicon Valley’s data privacy abuses, they can point to their work clamping down on facial recognition technology.
The big question, of course, is whether state and local authorities will act before federal authorities can. In addition to San Francisco banning the use of facial recognition software by its city government agencies, other cities, such as Oakland, California and Somerville, Massachusetts, are floating proposals of their own to stop facial recognition technology.
So is there anyone beyond the big tech vendors willing to step up and support facial recognition technology? It turns out that Amazon shareholders, when asked to vote on whether the company should continue to sell its Rekognition facial recognition technology to law enforcement agencies and the government, declined to limit the company. The money to be made by selling this technology, apparently, is just too great.
Which brings us back to the central problem faced by regulators and politicians anytime they try to crack down on data privacy abuses: big corporate interests are never going to back down in a fight when there are tens of millions (if not hundreds of millions) of dollars at stake. If Google and Facebook won’t back down, do you really think Amazon will?
So get ready for a potentially contentious 2019 and 2020, as the big tech giants marshal their lobbying firepower in Washington, D.C. and start to fight back. No doubt you’ll start to hear about new studies showing vast improvements in facial recognition technology (surprise, the racial and gender bias is gone!), and new rumors that the big tech giants have found a way to “self-regulate.” But don’t be fooled. We’ve heard that story before.