Does Predictive Policing Really Result in Biased Arrests?

In experimental rollouts across the nation, predictive policing models have shown a remarkable ability to help police officers and other law enforcement officials clamp down on illegal activity and reduce crime. The key question, however, is whether these predictive policing methods lead to systematic bias against minority communities or ethnic groups. According to the latest study led by George Mohler, a researcher at Indiana University – Purdue University Indianapolis (IUPUI), there is no statistically significant evidence of racial bias.

This IUPUI study, which has been touted in the media as the first study of its kind to look at real-time patrol data, seems to suggest that local communities in the United States should feel safe in giving the green light to future predictive policing rollouts. But is that really the case?

Inside the new study on predictive policing by IUPUI

The study, based on arrest data provided by the Los Angeles Police Department (LAPD), looked at arrest incidents recorded during empirical field trials. This focus on real-time patrol data from the field is an important distinction, because previous studies on predictive policing and bias have been based on simulated patrols using historical data.

When the researchers looked at the actual real-time data that was flowing in, they could not discern any significant differences in arrests made under the policing strategy suggested by a human analyst versus the policing strategy suggested by a computer algorithm. In other words, the computer algorithms did not display bias as part of their crime forecasting.


While the researchers did concede that higher levels of arrests were made in certain geographic areas of the city, this could be explained by the fact that certain areas are always high-crime areas. In these high-crime areas, you will always see a statistically higher number of arrests than in a relatively low-crime area. In layman’s terms, sending a patrol unit to a rough neighborhood known for drug use and prostitution is always going to result in a higher number of arrests than if that patrol had been sent to an upscale, quiet neighborhood. Makes sense, right?

(Editor’s note: George Mohler is the lead researcher for the IUPUI study and is also co-founder and board member of PredPol, a predictive policing company.)

The potential bias of predictive policing

Interestingly, previous studies have hinted at potential bias associated with predictive policing, including one well-known study that looked at Oakland drug arrest data. In that study, the use of predictive policing algorithms appeared to lead to more biased stops and arrests than traditional policing strategies would have produced.

What it all comes down to, apparently, is the data set that is being used. As noted above, historical data and computer simulations tend to suggest racial and other discriminatory bias, while this new IUPUI data set based on real-time, empirical data does not.

As George Mohler of IUPUI points out, a lot comes down to the type of data fed into the computer algorithms: “One important consideration is what data to use as input into an algorithm. Certain data, for example drug arrests, may have bias to begin with and therefore an algorithm using the data will also be biased.”

As a result, Mohler’s research team tried to minimize the impact of any bias: “In our experiment we focused on using event data taken from reports by victims of burglary and motor vehicle theft. These types of events may have less room for bias given that they are not largely driven by discretionary arrests. Police departments should also collect data on when and where they are making patrols based upon predictive algorithms. They can then analyze the demographic distribution of residents in those areas and monitor whether certain populations are receiving more or less patrol. Third, there is some new research on how to remove bias using algorithms that have fairness built into them. Our research group is doing some work in this area.”
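To make that monitoring step concrete, here is a minimal sketch of how a department’s analysts might compare the share of algorithm-directed patrols landing in each census area against the demographic makeup of those areas. The data structures, field names, and numbers are hypothetical illustrations, not taken from the IUPUI study or from any PredPol product.

```python
# Hypothetical audit: compare each demographic group's share of
# algorithm-directed patrols against its share of the population.
from collections import Counter

# Made-up inputs: each patrol record names the census tract it covered,
# and each tract lists an (invented) predominant group and population.
patrols = ["tract_a", "tract_a", "tract_b", "tract_c", "tract_a"]
tracts = {
    "tract_a": {"group": "group_1", "population": 4000},
    "tract_b": {"group": "group_2", "population": 6000},
    "tract_c": {"group": "group_2", "population": 5000},
}

def patrol_vs_population_share(patrols, tracts):
    """Return each group's share of patrols alongside its share of population."""
    patrol_counts = Counter(tracts[t]["group"] for t in patrols)
    pop_counts = Counter()
    for info in tracts.values():
        pop_counts[info["group"]] += info["population"]

    total_patrols = sum(patrol_counts.values())
    total_pop = sum(pop_counts.values())
    return {
        group: {
            "patrol_share": patrol_counts.get(group, 0) / total_patrols,
            "population_share": pop_counts[group] / total_pop,
        }
        for group in pop_counts
    }

if __name__ == "__main__":
    for group, shares in patrol_vs_population_share(patrols, tracts).items():
        print(group, shares)
```

In practice an audit like this would run over geocoded patrol logs and census tables rather than hand-built dictionaries, but the underlying comparison of patrol share to population share is the same.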

And yet predictive policing programs still carry a negative connotation in many large urban areas. Members of racial and ethnic minority groups fear that the data used in these predictive policing programs will inadvertently ensnare them when they have done nothing wrong, leading to biased arrests. In some cases, in fact, these programs might even lead to bias against women.

Here’s just one example: in some cases, urban police departments use social media data to help construct sophisticated organizational charts of crime groups. But sometimes that social media data can be misused. What if you are just a normal, law-abiding woman but one of your family members happens to be in a gang or crime cartel? It is quite possible that your name might pop up in police reports for the sole reason that you are related to someone else, not because of any action you have taken. As a result of these crime predictions, you might be called in for questioning or even arrested.

As even the IUPUI researchers would probably admit, the real value of prediction models to prevent crime is in low-income neighborhoods rather than upscale, suburban neighborhoods. Thus, members of certain racial and ethnic groups may feel unfairly targeted – especially since previous implementations of predictive policing programs (including one high-profile example in New Orleans) have been notoriously blanketed in secrecy and opaqueness. In some cases, low-income and racially diverse neighborhoods had no idea that police officers were using data-driven predictive policing models to target their areas.

The privacy implications of predictive policing

While most people would not argue with traditional policing tactics – which involve the analysis of “hot spots” to properly allocate law enforcement resources – there is much more trepidation about predictive policing, mostly as a result of the huge potential privacy implications.

In the race to create better and better data models (such as those sold by PredPol), there is a risk of overreach in the types of data being pulled into the model. For example, there is nothing particularly menacing about police departments using historical crime data to predict future crime. That is already what is being done with traditional “hot spot” analysis. The real danger, however, is when other data is used – such as social networking data, health data or financial data.

What happens, for example, if the major credit bureaus start sharing data with predictive policing models? Or if local organizations in the community – such as churches or political groups – start sharing data with police officers? Or if police departments are using machine learning to connect the dots without any human analyst oversight?

The surging popularity of AI and machine learning makes it critical for researchers to focus on how to eradicate bias from predictive policing algorithms. Failure to do so could transform them into inscrutable black boxes, where humans really don’t know how the machines came up with their predictive policing recommendations.

Mohler, for example, notes, “There is some new research on how to remove bias using algorithms that have fairness built into them. Our research group is working on constructing point processes that input census data in addition to crime incidents. The algorithm then detects if certain demographic groups are receiving more attention from police and adjusts the algorithm to make predictive policing more fair.”
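Mohler’s group works with point processes; as a loose, assumption-laden illustration of the underlying idea, the sketch below reweights simple per-area crime counts so that no demographic group’s neighborhoods absorb a share of patrol attention far beyond their share of the population. The function names, the 20 percent tolerance, and the toy numbers are invented for illustration and are not the research group’s actual algorithm.

```python
# Illustrative reweighting: cap each group's share of predicted patrol
# attention relative to its population share, then scale area scores down.
def fairness_adjusted_scores(crime_counts, area_group, group_pop_share):
    """crime_counts: area -> recent incident count (hypothetical input)
    area_group: area -> demographic group of that census area
    group_pop_share: group -> share of the city population
    Returns adjusted patrol-priority scores per area."""
    total = sum(crime_counts.values())

    # Raw share of predicted patrol attention each group would receive.
    group_attention = {}
    for area, count in crime_counts.items():
        g = area_group[area]
        group_attention[g] = group_attention.get(g, 0.0) + count / total

    # Scale each area's score so a group's attention does not exceed its
    # population share by more than a chosen tolerance (here, 20 percent).
    tolerance = 1.2
    adjusted = {}
    for area, count in crime_counts.items():
        g = area_group[area]
        cap = group_pop_share[g] * tolerance
        factor = min(1.0, cap / group_attention[g]) if group_attention[g] > 0 else 1.0
        adjusted[area] = (count / total) * factor
    return adjusted

if __name__ == "__main__":
    counts = {"area_1": 40, "area_2": 35, "area_3": 25}
    groups = {"area_1": "group_1", "area_2": "group_1", "area_3": "group_2"}
    pop_share = {"group_1": 0.5, "group_2": 0.5}
    print(fairness_adjusted_scores(counts, groups, pop_share))
```

The point of the sketch is simply that fairness can be treated as a constraint on the output of a crime forecast, rather than something bolted on after patrols have already been assigned.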

There should be a greater effort to control how these predictive policing models can be used. Remember: many of the first predictive policing models were created to track down terrorists, many of whom were members of “underground” organizations whose operations were very hard to understand using traditional methods. These models were not designed to track and monitor everyday citizens in Western democracies.

Final thoughts on predictive policing

In many ways, predictive policing is here to stay. There is no putting the genie back into the bottle. However, that doesn’t mean that local communities need to put up their collective hands and simply accept the new reality. In any local community, what is needed is full transparency and trust.

As IUPUI researcher George Mohler points out, there is a need to be as transparent as possible with new reports and studies – and to make these results public whenever possible: “We have published the results of our studies in Los Angeles. There have also been experiments conducted in Chicago and Philadelphia, which have been published.”


This spirit of transparency and openness is crucial. If there is not a two-way dialogue between police departments using this sophisticated new technology and the communities being policed, it is almost certain that there will be continuing debate over the public policy and privacy implications of predictive policing.

 

