Police Are Trying to Predict the Future

‘Predictive policing’ uses historic data to forecast where crime will occur, but critics question the accuracy

Credit: Joe Raedle/Getty Images

How can policing be improved? Politicians, police departments, activist groups, and the public all have their own ideas about how to meet that challenge. Yet while social media and smartphones have increased awareness of police brutality, an entirely different technology has been steadily and covertly expanding across public life.

PredPol is one of a number of companies selling predictive policing software built on artificial intelligence. Its software analyzes crime data to determine which parts of a city are more likely to see criminal behavior, helping police departments “predict” crime before it happens.

Based on PredPol’s algorithmic recommendations, police patrol certain areas with more vigilance. The company claims that one in 33 Americans is protected by its data, implying that it is used by forces across the U.S.; OneZero asked PredPol to clarify this claim and to comment on this article, but received no response. PredPol uses confidential contracts, making it difficult to find out exactly which cities use the technology, but documents seen by Vice earlier this year show that the software has been used for years by police departments in Los Angeles, Santa Cruz, Seattle, Atlanta, Tacoma, Little Rock, and many others (including, somewhat surprisingly, the University of California, Berkeley). Secretive contracts mean these systems have largely avoided public scrutiny until recently, yet it’s likely that more of us are coming under the watchful eyes of companies like PredPol, HunchLab, and others, either as criminal offenders within their systems or via police presence in our neighborhoods.

This type of “smart policing” assumes that machine learning will increase police efficiency and streamline organizational processes, and is therefore key to police departments’ mandate to protect and to serve. Andrew Ferguson, a law professor at the University of the District of Columbia, told OneZero what this looks like in practice: “[Each company] is doing the same basic work, which is trying to find inputs that police think correlate with certain risk factors, and trying to either put the police car in the right place at the right time, or trying to identify an individual and intervene before they can act in a violent way, or to identify group crimes and predict and thwart their growth.”

This movement has grown at an exceptional rate over the past 10 years. Ferguson says that at least 60 police departments across the U.S. are using some form of predictive analytics — yet they avoid the term itself because of its controversial reputation. Predictive policing implies a surveillance state and invites Minority Report comparisons. It also seems to rely heavily on the idea that technology is a neutral tool, though technology systems and data collection notoriously amplify the biases and prejudices of their creators.

PredPol uses a place-based approach: it takes historic crime data, focused on property crimes such as burglary and trespassing, and analyzes it algorithmically to predict the likelihood of crimes occurring in the same or neighboring areas. To its credit, Ferguson says, PredPol has acknowledged the problems with data used in person-based policing, which targets groups or individuals and leaves more room for algorithmic bias to play a role. Social science research supports the place-based approach, showing that property crimes tend to cluster and spread, and that certain environmental vulnerabilities can be mitigated by a police presence in those areas.
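PredPol’s actual model is proprietary, though its founders have reportedly drawn on techniques from earthquake aftershock forecasting. To make the place-based idea concrete — recent incidents raise a risk score for their map-grid cell and its neighbors, and that score fades over time — here is a minimal, hypothetical Python sketch. The data, function name, and parameters are all invented for illustration and bear no relation to any real system.

```python
from collections import defaultdict
import math

def hotspot_scores(incidents, decay=0.1):
    """Score each grid cell by recent incidents in it and its neighbors.

    incidents: list of (cell_x, cell_y, days_ago) tuples -- toy data.
    Older incidents count for less (exponential time decay), and each
    incident also contributes a reduced score to adjacent cells,
    mimicking the "near-repeat" spillover of property crime.
    """
    scores = defaultdict(float)
    for x, y, days_ago in incidents:
        weight = math.exp(-decay * days_ago)
        for dx in (-1, 0, 1):
            for dy in (-1, 0, 1):
                # full weight in the incident's own cell, half in neighbors
                factor = 1.0 if dx == 0 and dy == 0 else 0.5
                scores[(x + dx, y + dy)] += factor * weight
    return scores

# Toy data: two recent burglaries in one cell, one old incident elsewhere
incidents = [(2, 3, 1), (2, 3, 2), (7, 7, 30)]
scores = hotspot_scores(incidents)
top_cell = max(scores, key=scores.get)
print(top_cell)  # -> (2, 3), the cell with repeated recent incidents
```

Even this toy version makes the critics’ point visible: the “prediction” is nothing more than a weighted replay of where incidents were recorded before, so whatever bias exists in the input data is carried straight into the output.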

That said, the focus on the location of crime perpetuates age-old ideas while obscuring what policing actually looks like on the ground. “It’s not a terribly revolutionary idea,” Ferguson says. “Police have to patrol somewhere, PredPol says, so this might give them insight into redirecting their patrols. Obviously, though, there are people in these places, and they’re going to come into contact with police who are already looking for criminal activity. The technology, by identifying these hot areas, is changing how police perceive that area. They’re going to be on alert and they might see something as suspicious that is actually regular behavior and that’s going to impact those people.”

Since 2013, when a federal court ruled New York City’s controversial stop-and-frisk practices unconstitutional, the city has turned to data gathering and storage, along with predictive analytics. Ferguson argues this is used to justify maintaining the same invasive policing of marginalized communities, only now with data as the rationale. Critics of predictive policing have made the same point as the practice has grown in popularity.

The academic researcher Bilel Benbouzid, author of the Big Data & Society paper “To predict and to manage. Predictive policing in the United States,” refers to this as “the cybernetic imaginary of grounding social harmony in calculations.” In other words, community safety is turned into sets of data that are intended to influence police action and strengthen managerial productivity.

Despite the increasing use of predictive policing, Ferguson says its efficacy is largely unproven scientifically. What is proven is the usefulness of data under the right conditions.

Samuel Sinyangwe is a data scientist with Campaign Zero, an organization devoted to police reform. He stresses that, ultimately, data is just a tool. “It can be used for good, or to make problems worse,” he says. “It’s about how it is used: Is it consistent with goals that are demanded by community members, or is it being used without public input?”

Sinyangwe and other activists argue that we should be using the vast amounts of crime data to scrutinize police activity, including violence and malfeasance. As public safety becomes increasingly privatized, we should be conscious of how data gathering can be put to other ends.

“Based on the crime data that these platforms and police are using,” Sinyangwe says, “they aren’t predicting where and when crime is occurring — they’re predicting where and when police find and arrest people for those crimes, and that reinforces the existing bias and over-policing of black communities.”

All this data, Sinyangwe stresses, also can and should be used to identify officers with a higher likelihood of engaging in misconduct, and that’s where activists are trying to drive the conversation. Some crime goes under-reported, while crime in certain neighborhoods is over-indexed. So the inputs that these algorithms currently use are flawed to begin with, particularly when that data on police behavior and violence could be put to better use. “If we’re serious about public safety, crime prevention, and community input, then we should be using this technology to those ends,” Sinyangwe says.

For communities already struggling with the police, being faced with technocratic solutions to the wrong problems is not a promising development. Internal data for PredPol and other platforms is mostly kept confidential on the grounds that it is proprietary, and there is little transparency about when a police department begins using it — Vice’s Motherboard was only able to report on PredPol’s scale after internal documents were leaked. An audit released earlier this year found that the Los Angeles Police Department’s predictive programs, including PredPol, lacked oversight. Despite attempts at legislation in recent years, the law still appears to be lagging behind the technology. The Law Society, a U.K.-based trade body, released a report this year arguing that police forces in the U.K. require “urgent oversight” in their use of predictive policing and other technologies, and the LAPD case suggests that the situation in the U.S. is similar.

“I tend to be agnostic about the technology itself,” Ferguson says. “This surveillance state we live under also happens to provide us with much more information about police practices and patterns.” The problem, he says, is that funding for these projects often goes to precincts themselves, rather than to various social service agencies that might be better equipped to navigate the data. “If you turned the focus from policing these problems using data to solving these problems using data, it opens up a whole new avenue to addressing our societal ills,” he says.

In the meantime, platforms like PredPol, HunchLab, Axon, and ShotSpotter are all competing to be the go-to analytics provider for U.S. police departments. Axon has given its bodycams away to police for free, aiming to build relationships with departments while offering a suite of services intended to position the company as the data provider of choice. “You’re seeing this recognition that if you become the company that supports police, you can offer them a whole host of services, and that is the way to ensure you have a viable business model,” Ferguson says.

Public protection and welfare determined by profit margins. These are the companies redefining safety in our communities.
