Predictive Policing Raises Important Privacy and Human Rights Concerns

It almost sounds like a scene out of the Hollywood blockbuster “Minority Report” – police departments around the world, from Los Angeles to China, are embracing new predictive policing technology that promises to help them spot criminals before a crime ever takes place. However, communities often have little or no idea of why or how this technology is being used, and that raises important privacy and human rights concerns. What data, exactly, are police departments using, and how is it shaping the way they do their jobs?

Predictive policing as the future of law enforcement

Current predictive policing initiatives can be traced back to the mid-2000s, when the goal shifted from merely responding to crime to anticipating, preventing and reducing it. The idea was that cutting-edge crime forecasting tools – everything from data mining to geospatial prediction – could help law enforcement officials deploy resources more effectively and predict where crimes were likely to occur.

There was nothing particularly attention-getting about these early initiatives from a privacy perspective, mainly because they were seen as just an extension of what law enforcement agencies had already been doing for decades. For example, police departments around the world know that certain events – such as New Year’s Eve celebrations – can require additional policing efforts. So why not make use of data already on hand to “predict” potential hot spots and prevent crime before the trouble ever takes place?

And sometimes this data analysis uncovered previously unknown patterns and trends. For example, one predictive policing initiative in Texas discovered an unexpected link between burglaries and housing code violations. “Fragile neighborhoods” where housing was sub-standard were flagged for greater police resources, and that led to a reduction in crime. Just by being more visible in these neighborhoods, police could send a warning signal to potential criminals.

The explosion of non-traditional data available for predictive policing

But something very interesting started to happen around 2009 – the big tech companies of Silicon Valley started to get involved. The “Big Data” trend was just underway, and police departments around the world started to realize that they had a wealth of information and non-traditional data that they could tap into as part of their new predictive policing efforts. For example, “social network analysis” suddenly became a powerful tool in the hands of police departments. Just by knowing whom criminals were talking to on social media, police officers could start to piece together some very intricate criminal networks.

Palantir and the privacy issues raised by predictive policing

One of the biggest test cases took place in New Orleans, where predictive policing built on sophisticated data mining tools from Silicon Valley’s Palantir was able to uncover ties among gang members, outline extensive criminal histories, and even flag individuals who might become future gang members. Needless to say, arrests went up, prosecutions went up, and the New Orleans Police Department won acclaim for its policing efforts.

However, there were a few problems here from a privacy perspective. First and most importantly, nobody told New Orleans city council members or the local community. As one politician noted, “No one in New Orleans even knows about this.” And there was a good reason for that – the Palantir initiative was budgeted as a “philanthropic venture,” so the program never received public vetting. It flew under the radar without setting off any privacy alarms.

And there were other troubling problems with the Palantir predictive policing experiment in New Orleans. For example, it tended to have an outsized impact on poor communities of color. Moreover, as experts now point out, the experiment had the potential to sweep up innocent people connected to criminals through several degrees of separation. A cousin of a known drug dealer, for instance, might be called in for questioning – despite having no criminal background and no reason to be suspected, other than casual social connections via Facebook. This is a clear invasion of privacy.
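To see how quickly such a sweep widens, consider a minimal sketch of degrees-of-separation analysis over a contact graph. This is purely illustrative – the graph, the names, and the `flag_within_degrees` function are invented for this example and do not reflect Palantir's actual system:

```python
from collections import deque

# Hypothetical contact graph: edges represent casual social-media
# connections. All names and links are invented for illustration.
contacts = {
    "known_dealer": ["associate_a", "cousin_b"],
    "associate_a": ["known_dealer", "friend_c"],
    "cousin_b": ["known_dealer", "neighbor_d"],
    "friend_c": ["associate_a"],
    "neighbor_d": ["cousin_b"],
}

def flag_within_degrees(graph, seed, max_degrees):
    """Breadth-first search: return everyone within max_degrees
    hops of a single flagged individual, with their distance."""
    flagged = {seed: 0}
    queue = deque([seed])
    while queue:
        person = queue.popleft()
        if flagged[person] == max_degrees:
            continue  # don't expand beyond the cutoff
        for contact in graph.get(person, []):
            if contact not in flagged:
                flagged[contact] = flagged[person] + 1
                queue.append(contact)
    return flagged

# Two degrees of separation from one known offender already pulls in
# people with no criminal history of their own.
print(flag_within_degrees(contacts, "known_dealer", 2))
# → {'known_dealer': 0, 'associate_a': 1, 'cousin_b': 1,
#    'friend_c': 2, 'neighbor_d': 2}
```

Even in this tiny toy network, raising the cutoff by one degree doubles the number of people flagged – and in a real social graph, where each person has hundreds of connections, the sweep grows exponentially, which is exactly why innocent relatives and acquaintances end up on such lists.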

Moreover, it is still debatable whether historical crime data can accurately predict future criminal activity. But other police departments have largely ignored that uncertainty, along with the grey area around individuals’ privacy rights. For example, the Los Angeles Police Department (LAPD) and Chicago Police Department (CPD) have both rolled out ambitious predictive policing initiatives. Using technology as cover, police chiefs can claim that better crime data lets them make smarter decisions and lead more effective investigations.

Human rights abuses in China stemming from predictive policing

Inevitably, predictive policing was going to be used for purposes for which it was never intended. The primary reason for the creation of the Palantir technology, in fact, was to stop terrorists. Then, mission creep led to the technology being used to stop dangerous criminals. And now the trend seems to be that the technology is being used to stop dangerous political dissidents, with serious implications for racial justice and civil liberties.

Consider the case of the Xinjiang region in China, which has become a hotbed for new human rights abuses stemming from predictive policing. Human rights activists claim that Big Data is being used to fuel a crackdown on ethnic minorities. In fact, any sign of “political disloyalty” can now be used to detain an individual and send him or her to an extralegal political education center.

And the scale of the data being used as part of these human rights abuses is staggering. Just about anything can be used to prove political disloyalty – bank records, health records, vehicle ownership and even Wi-Fi activity. Add in the fact that security camera footage is being fed into facial recognition systems, and the local Chinese police have a very powerful way to crack down on just about anyone at any time.

Even worse from a human rights perspective, this technology is very much a “black box.” Once the data has been entered into a computer and a result has been generated, it is almost impossible for individuals to challenge the outcome. This raises the frightening human rights issue of arbitrary detainment. It almost sounds like a plot out of a novel by Orwell or Kafka – a citizen arbitrarily detained, for no apparent reason, with no chance of appeal. On top of that, data is being actively shared not only with the Xinjiang police, but also with officials within the Chinese Communist Party.

Predictive policing programs continue to proliferate

The perceived success of these predictive policing programs in reducing crime makes it nearly inevitable that they will continue to proliferate. For example, Palantir used the stunning success of the New Orleans predictive policing program (the same one that city council members didn’t even know about) to sell its technology to other police departments. The Chicago Police Department, for example, plans to double the number of police districts utilizing predictive policing programs in 2018. And it’s easy to see that smaller police departments around the nation will follow their lead.

Looking ahead, there needs to be more public awareness of the scale and scope of these programs, as well as their potential for human rights and privacy abuses within the criminal justice system. There is nothing inherently wrong with using data to map crime and predict criminal activity, but there is something wrong when there is no transparency about how the data is being used, and when communities and politicians have no idea that predictive policing programs exist. That lack of transparency, unfortunately, can lead to even more serious outcomes like Xinjiang, where privacy and human rights abuses are now ingrained into the system. Is that a future we really want?