
Academics Confirm Major Predictive Policing Algorithm is Fundamentally Flawed

PredPol uses an algorithm based on earthquake prediction to “predict crime.” Academics say it’s simplistic and harmful.
Image: Surveillance compilation image with the PredPol algorithm.

Last week, Motherboard published an investigation which revealed that law enforcement agencies around the country are using PredPol—a predictive policing software that once cited the controversial, unproven “broken windows” policing theory as part of its best practices.

Our report showed that local police in Kansas, Washington, South Carolina, California, Georgia, Utah, and Michigan are using or have used the software. In a 2014 presentation to police departments obtained by Motherboard, the company says that the software is “based on nearly seven years of detailed academic research into the causes of crime pattern formation … the mathematics looks complicated—and it is complicated for normal mortal humans—but the behaviors upon which the math is based are very understandable.”


Image: A screenshot from a slide presentation titled “Predictive Policing Tacoma Overview Deck (2012 July)” obtained via public records request from the Tacoma police department.

The company says those behaviors are “repeat victimization” of an address, “near-repeat victimization” (the proximity of other addresses to previously reported crimes), and “local search” (criminals are likely to commit crimes near their homes or near other crimes they’ve committed, PredPol says).

But academics Motherboard spoke to say that the mathematical theory that is used to power PredPol is flawed, and that its algorithm—at least as pitched to police—is far too simplistic to actually predict crime.

Kristian Lum, who co-wrote a 2016 paper that tested the algorithmic mechanisms of PredPol with real crime data, told Motherboard in a phone call that although PredPol is powered by complicated-looking mathematical formulas, its actual function can be summarized as a moving average—or an average of subsets within a data set.

Image: The self-exciting point process model of burglary.

“The level of simplicity there is buried in all the talk about using these fancy seismographic models with aftershocks,” Lum said. “In practice, at least for the data that I as a researcher have looked at, it reduced to, for the most part, not anything that was really significantly different than just a moving average.” Basically, PredPol takes an average of where arrests have already happened, and tells police to go back there.
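To make the “moving average” characterization concrete, here is a minimal sketch in Python of what such a calculation looks like. It is not PredPol’s code; the grid cells, window length, and data format are assumptions for illustration. The ranking simply favors the cells where the most incidents have already been recorded.

```python
# Illustrative sketch only -- not PredPol's code. It shows what a per-cell
# moving average of past incident reports looks like: the cells flagged for
# patrol are simply the cells with the most recorded incidents recently.
from collections import Counter

def moving_average_hotspots(incidents, window_days, today, top_k=3):
    """incidents: list of (day, cell_id) pairs for recorded incidents.
    Returns the top_k grid cells ranked by average daily incident count
    over the trailing window."""
    recent = [cell for day, cell in incidents if today - window_days < day <= today]
    counts = Counter(recent)
    averages = {cell: n / window_days for cell, n in counts.items()}
    return sorted(averages, key=averages.get, reverse=True)[:top_k]

# Hypothetical data: (day, cell) pairs from a police records system.
incidents = [(1, "A3"), (2, "A3"), (2, "B1"), (5, "A3"), (6, "C2"), (7, "B1")]
print(moving_average_hotspots(incidents, window_days=7, today=7))  # ['A3', 'B1', 'C2']
```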

The academic foundation for PredPol’s software takes a statistical modeling method used to predict earthquakes and applies it to crime. Much as earthquakes are likely to recur in similar places, the papers argue, crimes are also likely to occur in similar places.
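The models in question are self-exciting point processes of the kind used for earthquake aftershocks. As a rough sketch of the general form (the specific background rate and triggering kernel are modeling choices, and may differ from PredPol’s production system), the predicted rate of events at time t is a baseline plus a contribution from each past event that fades as that event recedes:

```latex
% General self-exciting (Hawkes-type) conditional intensity:
% a background rate plus decaying "aftershock" contributions from past events t_i.
\lambda(t) = \mu(t) + \sum_{t_i < t} g(t - t_i)
```

Each recorded incident temporarily raises the predicted rate nearby, much as an earthquake raises the short-term likelihood of aftershocks in the same area.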


PredPol’s chief data scientist, George Mohler, co-authored academic papers that are frequently cited in PredPol materials reviewed by Motherboard. The first of these, “Self-exciting point process modeling of crime,” was published in the Journal of the American Statistical Association in 2011. Two later papers, “Geographic Profiling from Kinetic Models of Criminal Behavior” (2012) and “Randomized Controlled Field Trials of Predictive Policing” (2015), expand on the theory introduced in the 2011 paper.

Suresh Venkatasubramanian, a professor of computing at the University of Utah and a member of the board of directors for ACLU Utah, told Motherboard that earthquake data and crime data are, naturally, collected in different ways.

“I would say in our mind, the key difference is that in earthquake models, you have seismographs everywhere—wherever an earthquake happens, you’ll find it,” Venkatasubramanian said. “The crux of the issue really is that to what extent are you able to get data about what you’re observing that is not also totally on the model itself.”

In other words, we can assume that we’ll gather data about any earthquake that happens, anywhere on Earth. But in the case of crime, a number of factors affect our criminological data. For instance, some communities are more likely to call the cops than others, and some crimes are more likely to go unreported than others. Also, cops have a lot of individual leeway in deciding whether or not to arrest someone. In cities that have operated using a “broken windows” ideology—including New York, Los Angeles, Boston, and many others—police are explicitly encouraged to look for and harshly penalize petty crime that may go unnoticed in other neighborhoods.

Image: The probability of crime in a particular area at a particular time.

When a tool like PredPol tells police where to go, crime data starts to be affected by PredPol itself, creating a self-reinforcing feedback loop. Venkatasubramanian co-wrote a 2017 paper on this subject titled “Runaway Feedback Loops in Predictive Policing.”

“If you build predictive policing, you are essentially sending police to certain neighborhoods based on what they told you—but that also means you’re not sending police to other neighborhoods because the system didn’t tell you to go there,” Venkatasubramanian said. “If you assume that the data collection for your system is generated by police whom you sent to certain neighborhoods, then essentially your model is controlling the next round of data you get.”

In essence, the data that is returned quickly becomes flawed by the software itself.

“Because this data is collected as a by-product of police activity, predictions made on the basis of patterns learned from this data do not pertain to future instances of crime on the whole,” Venkatasubramanian’s study notes. “In this sense, predictive policing is aptly named: it is predicting future policing, not future crime.”
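A toy simulation makes the loop easy to see. The numbers and the allocation rule below are invented for illustration and are not drawn from the paper: two neighborhoods have identical underlying incident rates, but incidents are only recorded where patrols are sent, and patrols are sent wherever the most incidents have been recorded.

```python
# Toy simulation of a runaway feedback loop, in the spirit of the 2017 paper.
# Two neighborhoods have the SAME true incident rate, but records are only
# generated where police are sent, and police are sent where records pile up.
import random

random.seed(0)
true_rate = {"north": 0.3, "south": 0.3}   # identical underlying rates
recorded = {"north": 1, "south": 1}        # start with one record each

for day in range(1000):
    # Allocation rule: patrol the neighborhood with the most recorded incidents,
    # breaking ties at random.
    patrolled = max(recorded, key=lambda n: (recorded[n], random.random()))
    # Incidents only enter the data set where an officer is present to observe them.
    if random.random() < true_rate[patrolled]:
        recorded[patrolled] += 1

print(recorded)  # one neighborhood absorbs essentially all the records
```

After a thousand rounds, whichever neighborhood happened to be patrolled first has accumulated nearly all of the records, even though nothing distinguishes the two on the ground.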

This is not what PredPol’s marketing and training material, acquired by Motherboard from the Tacoma, Washington police department, says, though. The company sells its crime forecasts as rational, objective crime predictors.

“PredPol is not a guess or a hunch,” one document reads. “Predictions are based on your hard data about where and when crimes have occurred. PredPol uses mathematical models to tell you where the most likely locations for crime to occur today.”


Motherboard asked PredPol whether the theory of “self-exciting point process” modeling remains the academic basis of PredPol’s software in 2019, but PredPol did not respond to Motherboard’s request for comment.

Venkatasubramanian told Motherboard that the simple, self-reinforcing outcome of PredPol’s algorithm is driven by the approach to machine learning that it takes. Per the most recent data available, PredPol uses supervised machine learning. In practice, this means the system is trained on historical crime records as if they were ground truth: it makes predictions, police act on them, and the records those deployments generate become the data the system learns from next.

According to Venkatasubramanian, a better approach in a criminological context would be reinforcement learning. Simply put, this is when a machine learns by taking actions and adjusting to the feedback those actions produce, while accounting for the fact that its own choices shape what it observes. “In reinforcement learning, you recognize the fact that your actions can affect the outcome, and that you have a limited feedback,” Venkatasubramanian said.
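As an illustration of that difference (a sketch only; neither Venkatasubramanian nor the paper prescribes this particular rule), an epsilon-greedy allocator is one simple reinforcement-learning-style strategy: it usually follows the data it has, but deliberately spends a fraction of its decisions observing areas the data has not flagged, because feedback only arrives from wherever it acts.

```python
# Epsilon-greedy allocation sketch: mostly exploit the area the recorded data
# currently favors, but explore other areas a fraction of the time, since
# outcomes are only observed where resources are actually sent.
import random

def epsilon_greedy_choice(recorded_counts, patrol_counts, epsilon=0.1):
    """recorded_counts: incidents recorded per area; patrol_counts: visits per area.
    Returns the area to patrol next."""
    areas = list(recorded_counts)
    if random.random() < epsilon:
        return random.choice(areas)  # explore: pick any area at random
    # Exploit: pick the area with the highest observed incidents-per-visit rate.
    rates = {a: recorded_counts[a] / max(patrol_counts[a], 1) for a in areas}
    return max(rates, key=rates.get)
```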

Unless every single crime is reported, and unless police pursue all types of crimes committed by all people equally, it’s impossible to have a reinforcement learning system that predicts crime itself. Instead, you’ll just create a self-fulfilling prophecy in which police find crimes in the places they’ve been told to look for them, rather than everywhere.

This illustrates a challenge that comes along with the privatization of policing: academics can study PredPol from the outside, using data that’s publicly available when companies decide to make it available. But current internal data is often concealed, and this move is justified as protecting company secrets. Because of this, the public gets left in the dark, and people of color are left at disproportionate risk.

The 2016 study co-authored by Lum and William Isaac found that people of color were twice as likely to be targeted as white people, even though estimated drug use among black and white individuals is approximately equivalent. In the study, they applied PredPol’s formula for police allocation, as they understood it from the 2015 Mohler paper “Randomized Controlled Field Trials of Predictive Policing,” to a year’s worth of crime arrests in Oakland, California, from 2010 to 2011.

According to PredPol documents obtained by Motherboard, PredPol cannot “predict” drug use or possession—the type of crime that Lum and Isaac studied in their paper. But PredPol does claim to “predict” misdemeanors.

Across the US, black people were 3.49 times more likely than white people to be shot by police between 2011 and 2015, according to a statistical analysis of the US Police-Shooting Database. And police shootings happen constantly. According to a VICE News investigation, officers from the 50 largest police departments in the country shot, on average, more than 500 people per year between 2010 and 2016.

At its core, PredPol is a tool that aids law enforcement as it currently exists, and around the country, law enforcement targets people of color and puts them at disproportionate risk of harm.