Predictive Policing

Predictive policing, as defined by the National Institute of Justice, is policing that “tries to harness the power of information, geospatial technologies and evidence-based intervention models to reduce crime and improve public safety.” It is not meant to replace traditional policing but to aid it: advanced analytics are applied to data sets built from records of past crimes to help focus law enforcement, allowing agencies to react preemptively to what is likely to happen and to deploy resources accordingly. While meant to be more accurate and efficient, predictive policing in practice only serves to exacerbate biases already in the system, and it is usually opaque about what data fuels its algorithms, presenting dangers to our society and to the communities in which these programs are implemented.


There are two types of predictive policing technologies: place-based and person-based. Though they differ in theory, design, and practice, both can be problematic for the communities that are targeted.


Place-based predictive policing systems, such as HunchLab, predict the sites of likely criminal activity. HunchLab's model combines historical crime data with temporal patterns (day of the week, seasonality), weather, risk terrain modeling (locations of bars, bus stops, etc.), socioeconomic indicators, and near-repeat patterns.
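
To make this concrete, here is a minimal, hypothetical sketch of how a place-based system might combine features like these into a per-cell risk score. It is not HunchLab's actual model, whose internals are proprietary; the feature names, the synthetic data, and the choice of logistic regression are illustrative assumptions only.

```python
# Hypothetical sketch of a place-based risk model: each row is one (grid cell, shift)
# with features of the kinds described above; the target is whether a crime was later
# reported there. Feature names and model choice are illustrative assumptions only.
import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 5_000  # synthetic (grid cell, shift) observations

cells = pd.DataFrame({
    "recent_crime_count": rng.poisson(2, n),        # historical crime in the cell
    "day_of_week": rng.integers(0, 7, n),           # temporal pattern
    "is_summer": rng.integers(0, 2, n),             # seasonality
    "rainfall_mm": rng.gamma(1.0, 2.0, n),          # weather
    "dist_to_bar_m": rng.uniform(0, 2000, n),       # risk terrain modeling
    "median_income_k": rng.normal(45, 15, n),       # socioeconomic indicator
    "near_repeat_flag": rng.integers(0, 2, n),      # recent nearby crime
})

# Synthetic target: crime is more likely where recent crime and near-repeats are high.
logit = 0.4 * cells["recent_crime_count"] + 0.8 * cells["near_repeat_flag"] - 2.0
y = rng.random(n) < 1 / (1 + np.exp(-logit))

model = LogisticRegression(max_iter=1000).fit(cells, y)
cells["risk_score"] = model.predict_proba(cells)[:, 1]  # per-cell risk estimate

# Patrols would then be concentrated in the highest-risk cells for the coming shift.
print(cells.nlargest(5, "risk_score")[["recent_crime_count", "near_repeat_flag", "risk_score"]])
```

Note how directly the output depends on the historical crime counts fed in: if those counts reflect where police have looked rather than where crime occurs, the "highest-risk" cells simply reproduce past patrol patterns.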


The inherent problem with place-based policing technologies is the data used to build these models. Historical crime data may reflect enforcement practices that have disproportionately targeted those who live in low-income areas and minority communities. If the police have discriminated in the past, then these new technologies will only serve to intensify the problem, sending more officers to areas that are already targeted and unfairly treated.


However, in response to concerns about over-policing, HunchLab may re-weight the severity of crimes in its models to avoid aggressive and heedless policing. The company also specifically adjusts its model to avoid increasing racial tensions through unnecessary contact. Not all systems on the market do this, and some impacts may still be discriminatory, but HunchLab does at least attempt to address the problem.


The other, and more concerning, form of predictive policing is person-based. These systems, a prominent example being the Chicago Police Department’s Strategic Subjects List (SSL) or “heat list,” attempt to predict who is likely to commit, or be a victim of, certain crimes. The SSL identified city residents deemed most likely to be involved in a shooting, either as perpetrator or victim, and assigned each individual on the list a risk score reflecting their predicted likelihood of being involved in a shooting.


A de-identified SSL dataset – the data that went in about each person and the risk score that came out, but without identifying information about each individual – obtained by the Chicago Sun-Times through the Freedom of Information Act, contains 398,684 individuals. More than a third of the people on the list have never been arrested (133,474), with 88,592 of that group having a score greater than 250 (on a scale of 1-500); the remaining two-thirds have been arrested at least once for some crime (265,210). According to the Sun-Times’s reporting, the list is made up of “everyone who has been arrested and fingerprinted in Chicago since” 2013, but it remains unclear whether there are other ways to end up on the list, since 126,687 of the people listed have never been arrested, never been a victim of gun violence, and never been party to violence. It is also unclear how someone could lower their score or be removed from the list entirely.
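
Because the counts are easy to misread, the short script below shows how the proportions above follow from the published figures. The variable names are placeholders for the counts reported by the Sun-Times, not actual field names in the de-identified release.

```python
# Arithmetic on the counts reported from the de-identified SSL release.
# Variable names are placeholders; they are not fields from the actual file.
total = 398_684                     # individuals in the de-identified dataset
never_arrested = 133_474            # listed despite no arrest record
never_arrested_high_score = 88_592  # of those, score above 250 on the 1-500 scale
arrested_at_least_once = 265_210    # arrested at least once for any crime
no_arrest_no_violence = 126_687     # never arrested, never a victim of or party to violence

print(f"never arrested: {never_arrested / total:.1%}")                  # ~33.5%, just over a third
print(f"arrested at least once: {arrested_at_least_once / total:.1%}")  # ~66.5%, roughly two-thirds
print(f"scored above 250 despite no arrest: {never_arrested_high_score / never_arrested:.1%}")
print(f"no apparent route onto the list: {no_arrest_no_violence / total:.1%}")
```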


Officers knew who on their beat was on the list, and were directed not to include an individual’s score in a police report should that person be involved in an arrest, making it difficult to track exactly how the police use the list or to evaluate whether an officer’s interactions with an individual are influenced by that individual’s placement on it.


Person-based predictive policing supports traditional, punitive policing tactics, allowing law enforcement to target certain members of the community who are deemed to be “high risk”. It does so without having to be transparent about how one becomes “high risk” and how that affects their interactions with law enforcement.


Anyone who has taken even a brief glance at the history of policing in America knows that surveillance technologies tend to be used disproportionately against communities of color. This reality forces us to take a critical, even cynical, look at any new technology that law enforcement can utilize, in order to minimize the risk of replicating the past and reinforcing old biases.


Proponents will argue that these technologies are malleable, that they can change. They will cite new entrants into the field who have begun to advertise solutions to the problem of racial bias in data, or point out that many current systems are changing in response to concerns raised by the public. It is true that these technologies can change. But why were such glaring problems not addressed earlier?


Policing programs cause ripple effects throughout communities. Any such technology, regardless of intent, has to be tested and studied, much like a vaccine, before it is implemented at large scale; otherwise the consequences could be disastrous.


But even if we test these programs and make them as unbiased as they realistically can be, a larger problem remains: transparency. Specifically, transparency about what programs are being implemented, what data and logic govern those programs, and how they will affect the way law enforcement polices an area. It took the Chicago Sun-Times a FOIA request and then a legal battle to obtain the data it did, and there are still unanswered questions about the program.


Opaque decision-making with little to no accountability is dangerous at best. If predictive policing is to be implemented as successfully as possible, and its spread is frankly inevitable given how widely machine learning is now applied, it is imperative that we hold our law-enforcement agencies accountable for their actions and their programs, that we as communities understand how our information is being processed, and that we decide collectively how much of our freedom we are willing to give up for security.