In early May, the South Wales Police posted a press release bragging about the success of their facial recognition software deployed in 2017.
Perhaps today the person responsible for drafting that press release is reconsidering their choices, as the numbers presented are less than stellar. In fact, they’re downright disturbing, raising serious concerns about the risk and security landscape.
Why the police are investing heavily in facial recognition software
A face biometric system could replace a large part of the law enforcement workforce – while the upfront costs for the software are huge, automation is expected to bring the bills down in the future.
Catching criminals with outstanding warrants is certainly appealing, as is the idea of reducing crime in communities which, due to budget constraints, are now underserved by law enforcement.
According to Allied Market Research, the facial recognition market is expected to grow to $9.6 billion by 2022. And in 2015, 21% of the market revenue came from the homeland security sector so, obviously, the police and other authorities are spending quite a lot on facial recognition software.
From the other side of the fence, the FBI has said that more than 4,000 ransomware attacks have occurred daily since January 2016, with government institutions being primary targets.
Clearly, beyond the obvious privacy concerns surrounding facial recognition software, a discussion centered on security must take place. Let’s look at some results.
Facial recognition misidentification rates are immense
Since June 2017, South Wales Police has been testing facial recognition software at more than ten events. Out of 2,470 alerts of possible matches with suspects, 2,297 were false positives and only 173 were actual matches. Yes, roughly 93% of the alerts flagged the wrong person.
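The failure rate follows directly from the published figures – a quick sanity check, using only the numbers quoted above:

```python
# Figures reported by South Wales Police: 2,470 alerts in total,
# of which 2,297 were false positives and 173 were true matches.
alerts = 2470
false_positives = 2297
true_matches = 173

# The two categories should account for every alert.
assert false_positives + true_matches == alerts

false_positive_rate = false_positives / alerts
print(f"False-positive rate: {false_positive_rate:.1%}")  # roughly 93%
```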
Fortunately, no wrongful arrests were made, as the officers are aware of the limitations of this technology, especially when it processes low-quality images like those from CCTV.
The Metropolitan Police system fared no better – 95 people at last year’s Notting Hill Carnival were misidentified as criminals.
And in China, police officers wearing glasses with integrated facial recognition arrested 7 people in just a few days and issued travel bans for 27 others. What is not publicized is the accuracy of the facial recognition software used in those glasses.
What about the cybersecurity risks?
The usual discussion around facial recognition used in surveillance understandably centers on privacy issues. What should also be discussed more is the high probability of security breaches and the volume of personal information that can leak.
While biometric data is one of the most reliable tools for authentication, it is also a major risk. If someone’s credit card details are exposed in a high-profile breach like Equifax’s, they have the option to freeze their credit and can take steps to change the personal information that leaked. What if you lose your face?
A 2016 report from the Center on Privacy & Technology at Georgetown Law revealed that, “One in two American adults is in a law enforcement face recognition network. [These networks] include over 117 million American adults.”
In the UK, the independent Biometrics Commissioner has attacked the Government’s practice of keeping mugshots of unconvicted citizens – about 19 million of them. “The Commissioner outlines exactly how intrusive this national database is becoming as facial recognition is applied to it. He is also damning about the lack of safeguards surrounding its use”, said Jim Killock, executive director of Open Rights Group.
What can you do when biometric information is leaked or stolen?
Around the world, biometric information is captured, kept and analyzed in quantities that boggle the mind.
As facial recognition software is still in its infancy in some ways, laws on how this type of biometric data is used are still non-existent or up for debate. And regular citizens whose information is compromised have almost no legal avenues to pursue.
The most plausible answer to the above question is “nothing”.
Cyber criminals often elude the authorities or are sentenced years after the fact, while their victims receive no compensation and are left to fend for themselves.
What happens if the machine learning algorithms are altered or otherwise compromised?
One such attack has already been demonstrated against biometric systems. “Most biometric systems allow clients’ profiles to adapt with natural changes over time – so face recognition software updates little by little as your face ages and changes. But a malicious adversary can exploit this adaptability. By presenting a sequence of fake biometric traits to the sensor, an attacker can gradually update your profile until it is fully replaced with a different one, eventually allowing the attacker to impersonate you or other targeted clients,” explained an HBR report.
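The drift described above can be sketched in a few lines. This is a toy model, not any real vendor’s pipeline: faces are stand-in unit vectors, and the match threshold, adaptation rate and update rule are all illustrative assumptions.

```python
import numpy as np

def cosine(a, b):
    """Cosine similarity – a common face-matching score."""
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def unit(v):
    return v / np.linalg.norm(v)

rng = np.random.default_rng(0)
dim = 128  # typical face-embedding size

victim = unit(rng.normal(size=dim))    # enrolled user's embedding
attacker = unit(rng.normal(size=dim))  # attacker's embedding

template = victim.copy()  # the stored, self-adapting template
THRESHOLD = 0.80          # accept a probe at this similarity (assumed)
ALPHA = 0.10              # template adaptation rate (assumed)

steps = 0
while cosine(template, attacker) < THRESHOLD:
    # Present a fake trait nudged toward the attacker, but still close
    # enough to the current template to be accepted as the victim.
    probe = unit(template + 0.3 * (attacker - template))
    if cosine(template, probe) >= THRESHOLD:          # the system accepts it...
        template = unit((1 - ALPHA) * template + ALPHA * probe)  # ...and adapts
    steps += 1

print(f"Template replaced after {steps} accepted probes; "
      f"similarity to attacker: {cosine(template, attacker):.2f}")
```

Each probe looks legitimate in isolation; only the cumulative drift is malicious. One obvious mitigation is capping how far an adapted template may drift from the original enrollment sample, though the report suggests few deployed systems do so.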
During the Hack Cambridge 2017 event, a team of security enthusiasts demonstrated how easily they could “hijack faces”. Even the builders of facial recognition software sometimes admit that their machine learning algorithms can make errors almost impossible to spot and difficult to correct. And in 2016, The Atlantic reported how facial recognition software can have a racial bias problem that’s extremely difficult to correct. Add a malicious actor who gains access to this software into the mix and you get an explosive cocktail.
This is especially worrying since cybersecurity practices in governmental organizations leave a lot to be desired, as budgets get slashed and the security industry experiences a growing personnel crisis.
Governmental organizations are prime targets for attacks
Malicious actors that go after governmental organizations range from script kiddies renting botnets to APTs (advanced persistent threats) like state-sponsored hacking groups. While they may have different reasons to penetrate law enforcement servers and gain access to data, for citizens, the effect remains the same. Once their information is out there, no remediation is possible. Identity theft, reputational damage, and credit card fraud are just a few of the possible consequences.
Plenty of government websites have been infected with next-generation malware and have passed that threat on to their visitors. From the devastating WannaCry ransomware attack, built on a leaked NSA exploit, to the head of the FBI having his own data exposed, these incidents highlight why facial recognition software should be deployed only after investing in all possible security measures.
“My SF-86 lists every place I’ve ever lived since I was 18, every foreign travel I’ve ever taken, all of my family, their addresses. So it’s not just my identity that’s affected. I’ve got siblings. I’ve got five kids. All of that is in there,” said former FBI director James Comey about the OPM data breach that exposed the records of 22 million federal employees.
In March 2018, online services for the City of Atlanta fell prey to a ransomware attack. While the ransom demanded was worth $55,000 in Bitcoin, the authorities spent $2.6 million to recover from the attack.
And finally, an example directly related to biometric databases like the ones behind facial recognition software. In January 2018, unknown cybercriminals hacked into India’s Aadhaar, the world’s largest biometric ID system, and sold access to the personal information of one billion citizens for as little as $8. The price seems low, but the cybercrime industry follows a “trickle-down” pattern – the data was probably first sold at high prices to foreign cybercriminals, then repackaged into an $8 commodity. For the billion citizens whose identities were compromised, the damage cannot be calculated.
Is facial recognition software secure by design?
The lack of outside auditing of the facial recognition software purchased by the authorities and the somewhat opaque nature of public acquisitions are prime reasons why the public should question the race to implement facial recognition in surveillance systems.
A question rarely asked is “how safe is the infrastructure that holds and processes all this data?” The government sector is notoriously a low performer when it comes to cybersecurity, and the 2017 US State and Federal Government Cybersecurity Report reveals just how bad it is at securing our data.
Can facial recognition software ever be secure?
With enough ill intent, technical savvy or good old human error, any system can be breached and compromised. This is one of the few points on which all cybersecurity experts agree: security is a never-ending, high-stakes match in which someone will inevitably drop the ball.
As long as organizations refuse to invest in multiple layers of security and do not audit their suppliers on their security practices, facial recognition software will remain inherently unsafe, especially in the hands of governments.