by Douglas Ankney
According to a new report from MIT Technology Review, judges are increasingly relying on “criminal risk assessment algorithms.” The algorithms assign defendants recidivism scores that estimate the likelihood they will reoffend. A lower score means a kinder fate, while a higher score leads to a harsher one.
“The logic for using such algorithmic tools is that if you can accurately predict criminal behavior, you can allocate resources accordingly, whether for rehabilitation or for prison sentences,” explained author Karen Hao. “In theory, it also reduces any bias influencing the process because judges are making decisions on the basis of data-driven recommendations and not their gut.”
But Hao points out that these algorithms are trained on historical crime data, data that reflects the poor and minority communities law enforcement has targeted in the past. Defendants from these communities are therefore likely to receive higher recidivism scores. The algorithms pick out patterns associated with crime, but those patterns are statistical correlations, not causation. “If an algorithm found, for example, that low income was correlated with high recidivism, it would leave you none the wiser about whether low income actually caused crime. But this is precisely what risk assessment tools do: they turn correlative insights into causal scoring mechanisms.” (Under this example, a defendant would get a higher score simply for being poor.)
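The mechanism Hao describes can be illustrated with a toy sketch. The code below is purely hypothetical and does not represent any real risk assessment tool: it builds synthetic “historical” records in which a low-income group shows a higher recorded recidivism rate (as would happen if those communities were more heavily policed), then converts that correlation directly into a risk score. The feature name, the counts, and the 1-to-10 scale are all invented for illustration.

```python
# Illustrative sketch only, not a real risk assessment tool.
# Shows how a score built from historical correlations scores a
# defendant higher for a feature (here, low income) that merely
# correlates with recorded recidivism.
from collections import defaultdict

# Synthetic "historical" records: (low_income, reoffended) pairs.
# Heavier policing of low-income areas inflates recorded recidivism there.
records = [(1, 1)] * 60 + [(1, 0)] * 40 + [(0, 1)] * 30 + [(0, 0)] * 70

def train_rates(data):
    """Estimate P(recorded reoffense | low_income) from historical counts."""
    counts = defaultdict(lambda: [0, 0])  # feature value -> [reoffended, total]
    for feature, outcome in data:
        counts[feature][0] += outcome
        counts[feature][1] += 1
    return {f: reoff / total for f, (reoff, total) in counts.items()}

def risk_score(rates, low_income):
    """Turn the correlational rate into a 1-10 'risk score'."""
    return round(rates[low_income] * 10)

rates = train_rates(records)
print(risk_score(rates, low_income=1))  # low-income defendant scores 6
print(risk_score(rates, low_income=0))  # otherwise-identical defendant scores 3
```

The toy score doubles for the low-income defendant even though nothing in the data establishes that income causes reoffending; the score simply launders the policing pattern in the training data, which is the dynamic the article criticizes.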
Marbre Stahly-Butts, executive director of Law for Black Lives, identified the danger of these new approaches in criminal justice: “Data-driven risk assessment is a way to sanitize and legitimize oppressive systems ... Demands of the community to change the system are being met with an increased use of technology which actually leads to more over-surveillance of minority communities.”