California Police Using AI Program That Tells Them Where to Patrol; Critics Say It May Just Reinforce Racial Bias

Artificial intelligence for predictive policing is in use by several law enforcement departments across California, with varying degrees of success. One now in the headlines is called PredPol. iStock

Artificial intelligence designed for predictive policing has been used in about 50 police departments across the U.S., including 10 in California, with varying degrees of success, The Mercury News reported.

Santa Cruz Chief of Police Andrew Mills told the news outlet that AI systems had been "helpful" in identifying when officers should patrol already-known crime hot spots.

Last month, Motherboard revealed for the first time that the University of California, Berkeley, had been interested in the predictive product, produced by a company called PredPol, as had police agencies in Palo Alto and Merced. Internal files indicated PredPol had "current and near-term deployments" in Santa Cruz, San Francisco, Morgan Hill, Salinas and other areas. Other California contracts were confirmed to have ended.

A UC Berkeley police sergeant confirmed to The Mercury News that the PredPol product had been in use for "a couple of years," while Palo Alto police said it had been tested between 2013 and 2015. Police in Los Gatos/Monte Sereno used it between 2012 and last year, law enforcement officials said.

Critics of predictive policing have long argued that such AI could be tainted with racial bias and may lead to low-income communities of color being disproportionately targeted.

Essentially, predictive software is a method of big data analysis that invites comparisons to the science-fiction flick Minority Report, but in reality it doesn't "predict" actual crimes.

Instead, AI-enhanced software scans large-scale datasets to highlight locations at high risk of certain criminal activity. Theoretically, officers can then generate reports identifying the locations and times when crimes are likely to take place, and direct resources into those areas.

Some legal experts have raised concerns that biases built into the real-world criminal justice system—over-policing of one demographic, for example—will affect the data.

"The potential for bias to creep into the deployment of the tools is enormous. Simply put, the devil is in the data," Vincent Southerland, executive director of the Center on Race, Inequality, and the Law at NYU School of Law, wrote for the American Civil Liberties Union last year.

PredPol CEO Brian MacDonald told The Mercury News that the company did not use arrest data and only fed in basic crime information like times, dates and areas to help reduce input bias, but one employee acknowledged "reporting bias" remains an issue. PredPol highlights robbery, break-ins or homicide data, but not drug distribution or sex crimes, MacDonald said.

Numerous digital rights experts have previously blasted the use of secretive policing techniques that are based on data collected by law enforcement departments over the years.

"The root of these problems is in the data. Since predictive policing depends on historical crime data, and crime data is both incomplete and racially skewed (take drug offenses, for example), it seems inescapable that resulting predictions made by policing software will be inaccurate and arbitrary," the ACLU's Ezekiel Edwards wrote in a blog post in August 2016.

The digital rights group EFF has suggested predictive policing "amplifies racial disparities in policing by relying on patterns in police record keeping, not patterns in actual crime." But countering that, MacDonald told The Mercury News this week: "We feel like we're helping. We've seen dramatic crime decreases in some of our cities, in most cases by double digits."