
Risk Assessment Software: Biased and No Better Than Human Behavior Prediction

by Christopher Zoukis

Risk assessment software is all the rage in criminal justice circles. Programs such as COMPAS — Correctional Offender Management Profiling for Alternative Sanctions — are hailed as the ideal method for answering the all-important question: Will a given individual reoffend?

Algorithms and data, after all, are (supposedly) unbiased, faster, and more accurate ways to make this prediction. At least, that’s the argument. In reality, research is increasingly showing that computerized risk-assessment systems aren’t all they are cracked up to be. In fact, a new report authored by Dartmouth computer science professor Hany Farid and honors computer science student Julia Dressel indicates that untrained humans are better at predicting recidivism than the COMPAS system — even when they are given less information than the computer.

Farid and Dressel also found that the software exhibited high levels of bias.

“People hear words like ‘big data’ and ‘machine learning’ and often assume that these methods are both accurate and unbiased simply because of the amount of data used to build them,” said Dressel, whose senior undergraduate honors thesis is the basis of the report, “The Accuracy, Fairness, and Limits of Predicting Recidivism,” published in the journal Science Advances.

Farid and Dressel found that untrained people given access to just seven details about an individual predicted recidivism with 67 percent accuracy. COMPAS, which draws on 137 data points, has a 65 percent accuracy rate. Farid and Dressel said their results show the need to take the introduction of technology into the criminal justice system slowly and carefully.
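
To make those percentages concrete, here is a minimal sketch, in Python with invented numbers, of how an accuracy rate of this kind is computed: each prediction of reoffending is compared with what actually happened, and accuracy is simply the share of cases the predictor got right. Everything below is hypothetical and illustrates only the arithmetic, not the study’s data.

# Minimal sketch (hypothetical data): how a "percent accuracy" figure is
# computed when reoffense predictions are compared against actual outcomes.

def accuracy(predictions, outcomes):
    """Fraction of cases where the predicted label matches the real outcome."""
    correct = sum(1 for p, o in zip(predictions, outcomes) if p == o)
    return correct / len(outcomes)

# 1 = predicted/actual reoffense, 0 = predicted/actual no reoffense.
human_predictions = [1, 0, 1, 1, 0, 0, 1, 0, 1, 0]   # untrained reviewers
compas_predictions = [1, 1, 1, 0, 0, 1, 1, 0, 0, 0]  # risk tool, dichotomized
actual_outcomes = [1, 0, 1, 1, 0, 1, 0, 0, 1, 0]     # two-year follow-up

print(f"Human accuracy:  {accuracy(human_predictions, actual_outcomes):.0%}")
print(f"COMPAS accuracy: {accuracy(compas_predictions, actual_outcomes):.0%}")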

“Algorithmic tools sound impressive, so people are quick to assume that a tool’s predictions are inherently superior to human predictions,” the report’s authors wrote. They cautioned: “It’s dangerous if judges assume COMPAS produces accurate predictions, when in reality its accuracy is around 65 percent. Therefore, it’s important to expose when software like COMPAS isn’t performing as we expect. We cannot take any algorithm’s accuracy for granted, especially when the algorithm is being used to make decisions that can have serious consequences in someone’s life.”

Equally problematic is the entrenched bias being built into computerized tools such as COMPAS. In 2016, ProPublica published a report that found high levels of racial bias in COMPAS. According to that analysis, COMPAS falsely flagged black defendants as future recidivists at a higher rate than white defendants, wrongly labeled white defendants who went on to reoffend as low risk, and misclassified black defendants as being at higher risk for violent offenses.

Northpointe, Inc. (now Equivant), the owner of COMPAS, responded to Farid and Dressel’s report by inexplicably noting that the research further confirms that “COMPAS achieves good predictability.” The company did not address the fact that such “good” predictability was actually worse than that achieved by untrained individuals who were given far less information than the algorithm. Nor did the company respond to the report’s findings of bias.

Farid and Dressel are hopeful that tools like COMPAS could one day prove worthwhile. But that day is not today.

“Algorithms can have racially biased predictions even if race isn’t a feature of the algorithm,” they said. “It could be possible to develop algorithms void of racial bias. However, in this field, as research continues it’s important that we assess the algorithms for bias at every step of the way to ensure the tools are performing as we expect.” 
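
That last point, that an algorithm can produce racially biased predictions without ever being given race as an input, is easy to see with a small simulation. The Python sketch below uses entirely invented numbers: it assumes two groups with the same true reoffense rate, but one group accumulates more prior arrests because of heavier policing. A “race-blind” rule that flags anyone with three or more prior arrests then falsely labels far more non-reoffenders in the heavily policed group as high risk.

# Minimal sketch (simulated, hypothetical data): a risk rule that never sees
# race can still produce racially skewed errors if it relies on a feature,
# such as arrest counts, that is correlated with race because of uneven
# enforcement. All numbers here are invented for illustration only.
import random

random.seed(0)

def simulate_person(group):
    """Return (prior_arrests, actually_reoffends) for one simulated person."""
    reoffends = random.random() < 0.4        # same true reoffense rate in both groups
    base_arrests = 2 if reoffends else 1
    # Assumed proxy effect: group "A" is policed more heavily, inflating arrests.
    extra = random.randint(0, 3) if group == "A" else random.randint(0, 1)
    return base_arrests + extra, reoffends

def predict_high_risk(prior_arrests):
    """'Race-blind' rule: flag anyone with 3 or more prior arrests as high risk."""
    return prior_arrests >= 3

def false_positive_rate(group, n=10_000):
    """Share of non-reoffenders in a group who are wrongly flagged as high risk."""
    flagged = total = 0
    for _ in range(n):
        arrests, reoffends = simulate_person(group)
        if not reoffends:
            total += 1
            flagged += predict_high_risk(arrests)
    return flagged / total

print(f"False positive rate, group A: {false_positive_rate('A'):.0%}")
print(f"False positive rate, group B: {false_positive_rate('B'):.0%}")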

Source: alternet.org
