Artificial Intelligence Is Racist Yet Computer Algorithms Are Deciding Who Goes to Prison

After a protest years ago—fighting casinos ready to prey on poor Philadelphia neighborhoods—I was arrested, along with a handful of pastors and retirees. After more than 24 hours in a jail cell, officials brought us into a room to wait for arraignment in front of a bail magistrate miles away, on closed-circuit television. He would decide whether we would be held on cash bail or released home to await our court dates.

First, a black woman from the cell across from ours, clearly in pain, was held up in front of the camera by jail guards. Thirty seconds later the magistrate sent her back to her cell with cash bail set in the thousands of dollars. Then came our turn—a group of mostly white women. I could barely see the magistrate on the screen as he spoke, sending us home on our own recognizance, with no cash bail, to await our trials.

If I were arraigned today, in dozens of cities and states across the U.S., a "risk assessment algorithm" might suggest what the magistrate's decision should be. Would I show up for court if released? Would I be arrested again? Trained on thousands of criminal records and weighing anywhere from a handful to dozens of factors about me, the computer would spit out a recommendation for the judge to consider—set cash bail, send me home with conditions, or release me on my own recognizance.
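To make concrete what "weighing factors" means here, the toy sketch below shows one plausible shape such a score could take: a weighted sum of a few factors squashed into a number between 0 and 1, then bucketed into the kinds of recommendations a magistrate sees. Every factor name, weight, and threshold is invented for illustration; this is not the formula used by COMPAS, the PSA, or any real tool.

```python
# A minimal, hypothetical sketch of a pretrial risk score: a weighted sum of
# factors drawn from past records, mapped to a recommendation. All names,
# weights, and cutoffs below are invented for illustration only.
import math

WEIGHTS = {
    "prior_arrests": 0.45,
    "prior_failures_to_appear": 0.60,
    "age_under_25": 0.30,
    "current_charge_felony": 0.25,
}
BIAS = -2.0  # baseline log-odds, also invented

def risk_score(factors: dict) -> float:
    """Return a 0-1 'risk' estimate from a weighted sum of input factors."""
    z = BIAS + sum(WEIGHTS[name] * value for name, value in factors.items())
    return 1 / (1 + math.exp(-z))  # logistic squashing to a probability-like score

def recommendation(score: float) -> str:
    """Map the score to the kind of categories a magistrate might see."""
    if score < 0.3:
        return "release on own recognizance"
    if score < 0.6:
        return "release with conditions"
    return "set cash bail"

person = {"prior_arrests": 1, "prior_failures_to_appear": 0,
          "age_under_25": 1, "current_charge_felony": 0}
s = risk_score(person)
print(f"score={s:.2f} -> {recommendation(s)}")
```

The point of the sketch is only that the number a judge sees is a compression of historical records into a few weighted inputs; nothing about the person's own story survives that compression.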

For judges and politicians who oversee criminal justice in our communities, these algorithms seem far more evidence-based than the measurably racist recommendations of many bail magistrates and judges, and they offer an important political safety net for officials who want to let more people go home before their trials while protecting "public safety." But researchers are questioning these tools—sold as "objective"—more deeply than ever: a new study released last week found that a widely used risk assessment algorithm is no more accurate than untrained people, paid a dollar apiece, at guessing whether someone will be arrested again.

A signboard during an Intel event in the Indian city of Bangalore on April 4. MANJUNATH KIRAN/AFP/Getty Images

Simply intending to use risk assessment to unwind how systems punish people for being black or brown isn't enough. Studies show that algorithms trained on racist data carry high error rates for communities of color—especially over-predicting the risk of recidivism for black people. And the first independent study of how judges actually use these tools, a muscular one by Megan Stevenson, shows that whatever the algorithms predict, there is no guarantee judges or bail decision-makers will apply their forecasts in a way that consistently reduces incarceration or protects public safety.
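One way researchers surface those uneven error rates is to compare, group by group, how often a tool labels people high risk who are never arrested again. The sketch below runs that comparison on a handful of fabricated records, purely to show the calculation; it is not drawn from any real dataset or study.

```python
# Illustrative audit: compare false positive rates across groups. A tool can be
# "accurate" overall while its mistakes fall unevenly -- e.g., flagging people
# as high risk who are never rearrested far more often for one group than
# another. The records below are made up for demonstration.
from collections import defaultdict

# Each record: (group, predicted_high_risk, actually_rearrested)
records = [
    ("group_a", True, False), ("group_a", True, True), ("group_a", True, False),
    ("group_a", False, False), ("group_b", True, True), ("group_b", False, False),
    ("group_b", False, True), ("group_b", False, False),
]

def false_positive_rate(rows):
    """Share of people who were NOT rearrested but were labeled high risk."""
    not_rearrested = [r for r in rows if not r[2]]
    flagged = [r for r in not_rearrested if r[1]]
    return len(flagged) / len(not_rearrested) if not_rearrested else float("nan")

by_group = defaultdict(list)
for row in records:
    by_group[row[0]].append(row)

for group, rows in by_group.items():
    print(f"{group}: false positive rate = {false_positive_rate(rows):.2f}")
```

In this made-up example the two groups end up with very different false positive rates even though each prediction looks like a neutral score; that is the kind of disparity independent audits are designed to catch.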

If an algorithm describes me as risky—that I won't show up for court if released, that I might get arrested again, that my child should be taken by Child Protective Services, that I shouldn’t be eligible for a mortgage—shouldn’t I have the right to face my accuser? Factors about me and my family that might raise alarms for “risk” in pretrial systems or in child welfare offices are often correlated with poverty. And factors correlated with “risk” of not showing up for court can be easily ameliorated by providing people with the reminders, transportation, childcare, or other supports they need.


People judged by these algorithms should get to set the goals for how powerful systems use, and don't use, these tools—and they should have the right to directly appeal what the tools say. We should have the right not just to see and understand what the risk assessment tools say, but to independently audit the results that follow from their introduction into the systems where they are used. This could mean including independent data scientists, focused on the rights of communities, on robust community advisory boards governing child welfare, criminal justice, or other contexts that use algorithmic risk assessment. It could also mean halting the use of these algorithms if they are not producing the results we want.

A guard stands behind bars at the Adjustment Center during a media tour of California's Death Row at San Quentin State Prison in San Quentin, California, December 29, 2015. REUTERS/Stephen Lam/File Photo

If we admit that policing and criminal justice have been racist for centuries, then the data they produce is racist too. These tools can make our jurisdictions brave—but we must monitor, watch, and test how they are used. We should use them only to let far more people out of jail, to keep far more families united, and to set goals for reducing racial disparities in every system that adopts them. We'll be fighting for just that kind of oversight if Philadelphia puts risk assessment into our pretrial decision-making.

And no matter what—when a tool says a person is "risky" because of the behavior of thousands of others, cooked through a computer built to obscure individual stories—that person and their allies should have the right to face what the algorithm says and tell their unique, human story. If I'm at risk of not showing up for court, perhaps what I need is to tell my story of housing instability and find shelter for my family and me. If I'm at risk of losing my children, I need the right to advocate for the resources that will keep my family together. As criminal justice officials and others consider risk assessment as part of unwinding racist decision-making, now is the time to center the communities impacted by these tools—and their needs, rather than just their risk—in how that urgent change happens.

Hannah Sassaman is the policy director at Media Mobilizing Project and a current Soros Justice Fellow focusing on community oversight of risk assessment algorithms in criminal justice decision making.