CFR Presents

Net Politics

CFR experts investigate the impact of information and communication technologies on security, privacy, and international affairs.


Predictive Policing Is Not as Predictive As You Think

by Guest Blogger
January 11, 2017

Tom Cruise in Minority Report (DreamWorks Pictures).


David O’Connor is an intern for the Digital and Cyberspace Policy Program at the Council on Foreign Relations. 

The problem of policing has always been that it is reactive. If law enforcement officers could be at the right place at the right time, crime could be prevented, lives could be saved, and society would surely be safer. In recent years, predictive policing technology has been touted as just such a panacea. References to Minority Report are apparently obligatory when writing about the topic, but they obscure a critical point: predictive policing isn’t sci-fi; it’s a more elaborate version of existing, flawed practices.

Predictive policing is an umbrella term to describe law enforcement’s use of new big data and machine learning tools. There are two types of tools: person-based and location-based systems. Person-based systems like Chicago’s Strategic Subject List use a variety of risk factors, including social media analysis, to identify likely offenders. A 2015 report stated the Chicago Police Department had assembled a list of “roughly 400 individuals identified by certain factors as likely to be involved in violent crime.” This raises a host of civil liberties questions about what degree of latitude police should be granted to perform risk analysis on people with no criminal records. In the future, these questions will become ever more pressing as revelations of threat scores, StingRays and facial recognition technology continue to grab headlines.

In the present, however, the majority of publicly known predictive policing algorithms are location-based. Twenty of the nation’s fifty largest police departments are known to use such algorithms, all of which rely on historical crime data—things like 911 calls and police reports. Based on data trends, these algorithms direct police to locations that are likely to experience crime at a particular time. Unfortunately, the Department of Justice has estimated that less than half of violent crimes and even fewer household property crimes are reported to the police. An algorithm trying to make predictions based on historical data isn’t actually looking at crime; it’s looking at how police respond to crimes they know about.

This merely reinforces the biases of existing policing practices. In October, the Human Rights Data Analysis Group released a study that applied a predictive policing algorithm to the Oakland Police Department’s drug crime records from 2010. The study found that the algorithm would dispatch officers “almost exclusively to lower income, minority neighborhoods”—despite the fact that drug users are estimated to be widely dispersed throughout the city. The predictive algorithm essentially sent cops to areas where they had already made arrests, rather than identifying new areas where drugs might appear.
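The feedback loop the study describes can be seen in a toy simulation. This is a hypothetical illustration, not PredPol’s actual model: two neighborhoods have an identical underlying crime rate, but one starts with more recorded incidents because it was patrolled more heavily in the past. If new crimes are only recorded where police are sent, and police are sent where the most crimes are recorded, the initial disparity compounds indefinitely.

```python
import random

random.seed(0)

TRUE_RATE = 0.5               # same underlying crime probability in both areas
recorded = {"A": 10, "B": 1}  # historical records, biased by past patrol patterns

for day in range(1000):
    # "Predictive" step: send the patrol to the area with the most recorded crime.
    patrolled = max(recorded, key=recorded.get)
    # Police only record crime where they are present to observe it.
    if random.random() < TRUE_RATE:
        recorded[patrolled] += 1

print(recorded)  # virtually all new records accumulate in neighborhood A
```

After a thousand days, neighborhood B’s record is untouched even though crime there is just as common: the algorithm never sends anyone to observe it. The historical record is reinforced, not corrected.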

The algorithm the researchers analyzed was written by PredPol, one of the largest suppliers of predictive policing systems in the United States, and was chosen for being one of the few algorithms openly published in a scientific journal. PredPol says it uses “only three data points in making predictions: past type of crime, place of crime and time of crime. It uses no personal information about individuals or groups of individuals, eliminating any personal liberties and profiling concerns.” Ironically, these parsimonious standards ensure that the algorithm cannot improve on the historical record; it can only reinforce it.

Some systems, like IBM’s, wisely incorporate other data points like weather and proximity of liquor stores. Unlike PredPol, however, the vast majority of these algorithms are trade secrets and not subject to independent review. The secrecy around the software makes it harder for police departments and local governments to make fully informed decisions. It also bars the public from participating in the decision-making process and sows distrust.

That’s not to say that police departments shouldn’t use software to analyze their data. In fact, a 2015 study found predictive policing technology had significantly aided law enforcement in Los Angeles and Kent, England. In Norcross, Georgia, police claim that they saw a 15 percent reduction in robberies and burglaries within four months of deploying PredPol. The Atlanta Police Department was similarly enthused.

Further development of the technology is inevitable, so local governments and police departments should develop appropriate standards and practices. For starters, these algorithms should not be called ‘predictive.’ They aren’t crystal balls; they’re making forecasts based on limited data. Less obvious data points, like broken streetlights or the presence of trees, should be incorporated to refine these forecasts. As in St. Louis, they shouldn’t be used for minor crimes. Person-based algorithmic forecasts should never be accepted as meeting the reasonable suspicion requirement for detaining an individual, and only data specialists should have access to the software to reduce the chances of abuse. Police should by default incorporate racial impact assessments into their data, and programs should be open to academic review. The potential for mission creep is enormous; reviews must be regular to ensure that software is being used appropriately. Most importantly, city governments and police departments should conduct a transparent dialogue with the public about what data is being collected, particularly in the cases of cell phone surveillance and social media analysis, and citizens should be able to see what data has been compiled on them, be it photos or a threat score or biometrics.

It’s a natural choice for police departments with shrinking budgets and huge troves of data to turn to machines for help. However, officials must recognize that nerds won’t solve the problem of policing; data must be a supplement to traditional, people-focused police work. Data-driven predictions have suffered many prominent setbacks in 2016. We should be cautious about using them to arrest people.

1 Comment

  • Posted by GM

    The best that Predictive Policing can do is comparable to predicting the weather. It will never be 100% accurate and there is always bias and wishful thinking involved. There is also a great deal of anger when the inevitable bad forecast comes in and everyone is taken by surprise over the rain, snow, or bad storm that wasn’t expected. However, it would be foolish to stop using weather forecasting because the results are less than perfect. Not trying to direct police patrols based on data is akin to choosing random clothing before going outside every day. Shorts on January 5th? Why not? A parka on August 8th? Sure!

    Yes, there is a lot of bias in the world. Everyone reading this has likely made up their mind on the topic already and is only thinking of arguments to support their view at this moment.

    The study regarding bias in Oakland has absolutely zero application to predictive policing in real life because drug dealing is precisely the type of crime that is seldom based on reports from crime victims.

    Predictive policing is only a technique and is best applied to crimes such as burglary, m/v break-ins and other crimes carried out by suspects who are prone to operating in patterns like most humans. These crimes come to the attention of police via crime reports from victims. In areas where the public might not trust the police and is less inclined to report crime, the bias would be to direct police to areas where crime is reported, usually more affluent areas.

    The critics and academics with theories on this topic are well intentioned but truly show that they have absolutely no idea of how things work in the real world.

    Those peddling predictive policing software also need to realize that selling black boxes only makes people fear what is in the box. There is no magic to these programs and it is only like counting cards in a casino or trying to predict the weather. These tools only up the odds over the traditional random, reactive patrol. A well-trained crime analyst using low-cost, free and open source tools who is looking at tactical and strategic solutions to problems will blow away any of these software packages in the long run.


