
Biased Algorithms Are Still a Problem

by Michael Dean Thompson

The reduction of bias in criminal justice is an ongoing problem that does not lend itself to easy solutions. Artificial Intelligence (“AI”) may one day be that solution, though Boston University associate professor of law and assistant professor of computing and data sciences Ngozi Okidegbe points out that day is not here yet. The problem, as throughout criminal justice, is how pervasive and pernicious the issue of race can be. According to the Bureau of Justice Statistics, 1,186 of every 100,000 Black adults in the U.S. were incarcerated in 2021. Similarly, 1,004 of every 100,000 American Indian and Alaska Native adults were incarcerated that same year. In contrast, the rate for white adults was only 221 per 100,000. Those extraordinary numbers highlight the severity of the problem but say little about underlying causes.

There has long been a call to increase the use of algorithms in all aspects of criminal justice, from policing to parole and everything in between, with the hope that a purely data-driven solution would eliminate human prejudice. Unfortunately, the systems put into place are often “black boxes,” meaning their users have no real understanding of how the systems arrive at their recommendations. For example, in Flores v. Stanford, 2021 U.S. Dist. LEXIS 185700 (S.D.N.Y. 2021), Northpointe, Inc. filed an application seeking a court order to prevent materials it produced from being disclosed to an expert witness hired by plaintiffs who had been denied parole.

The New York State Board of Parole (“BOP”) uses Northpointe’s COMPAS to score potential parolees as an aid to board members. However, the plaintiffs, all of whom were juveniles when convicted (one was just 13 years old), are concerned that the BOP relies on COMPAS “without knowing how or whether COMPAS considers the diminished culpability of juveniles and the hallmark features of youth.” Northpointe specifically sought to withhold the data set it used to create the scores, as well as its general scores and regression models, because expert access to the information would “threaten [its] very existence.”

It is impossible to know how a decision might be made without access to the data used to generate that decision. And in the case of AI, even knowing the underlying data does not reveal how the system reaches its conclusions. When black boxes are used to determine where police are deployed, to assist a judge at sentencing, and to decide when a prisoner should be granted parole, the only way to see the resulting systemic inequities is through gross measures like the statistics above.

The organization ProPublica found that one system used in many states across the country “guessed” wrong about whether a person would reoffend twice as often for Black people as for white people. Nevertheless, criminal justice prior to the advent of algorithmic tools certainly fared no better.
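The disparity ProPublica described is, at bottom, a comparison of error rates across groups. The short sketch below uses invented numbers, not any real risk-assessment data, to illustrate how such a comparison can be made: for each group, count the people the tool flagged as likely to reoffend who in fact did not, then divide by everyone in that group who did not reoffend.

```python
# Minimal sketch with hypothetical data: comparing false positive rates by group.
# Each record is (group, predicted_to_reoffend, actually_reoffended).
records = [
    ("Black", True, False), ("Black", True, True), ("Black", False, False),
    ("Black", True, False), ("Black", False, True), ("Black", True, False),
    ("white", False, False), ("white", True, True), ("white", False, False),
    ("white", True, False), ("white", False, False), ("white", False, True),
]

def false_positive_rate(rows):
    """Share of people who did NOT reoffend but were flagged as likely to."""
    did_not_reoffend = [r for r in rows if not r[2]]
    wrongly_flagged = [r for r in did_not_reoffend if r[1]]
    return len(wrongly_flagged) / len(did_not_reoffend) if did_not_reoffend else 0.0

for group in ("Black", "white"):
    rows = [r for r in records if r[0] == group]
    print(f"{group}: false positive rate = {false_positive_rate(rows):.2f}")
```

With the made-up records above, the tool wrongly flags three of the four Black people who did not reoffend but only one of the four white people, the kind of gap ProPublica's analysis measured on real data.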

Professor Okidegbe points to “a three-pronged input problem.” The first issue is that jurisdictions intentionally withhold whether they use systems like pre-trial algorithms, and, when they do adopt them, they give very little, if any, consideration to the concerns of the marginalized communities affected by them. The second issue is what Kate Crawford of Microsoft called AI’s “White Guy Problem”: the overrepresentation of white men in the creation and training of AI, which can lead to biased data and output. And finally, even when marginalized communities are given a chance to opine on the use of the tools, their input tends not to change the outcome.

AI, like any other algorithmic approach to criminal justice, is only as good as the data on which it is trained. Yet even good data is not enough. Racial equity, as well as economic equity, in criminal justice requires that all phases of the algorithmic approaches be transparent and open for comment, even from traditionally voiceless incarcerated persons and the communities to which they belong. Additionally, the conversation should include metrics that compare how the humans in positions of authority (e.g., parole officers and judges) and the algorithms perform against each other in their predictions and outcomes. As Okidegbe says, “It means actually accounting for the knowledge from marginalized and politically oppressed communities, and having it inform how the algorithm is constructed.”
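As a rough illustration of the side-by-side metric the article calls for, the sketch below (again with invented data, not any real parole records) scores a human decision-maker and an algorithm on the same cases, so both sets of predictions can be checked against actual outcomes with the same yardstick.

```python
# Hypothetical sketch: scoring human and algorithmic predictions on the same cases.
# Each case is (human_predicted_reoffend, algo_predicted_reoffend, actually_reoffended).
cases = [
    (True, True, True), (True, False, False), (False, True, False),
    (False, False, False), (True, True, False), (False, False, True),
]

def accuracy(predictions, outcomes):
    """Fraction of cases where the prediction matched what actually happened."""
    correct = sum(1 for p, o in zip(predictions, outcomes) if p == o)
    return correct / len(outcomes)

human = [c[0] for c in cases]
algo = [c[1] for c in cases]
actual = [c[2] for c in cases]

print(f"human accuracy:     {accuracy(human, actual):.2f}")
print(f"algorithm accuracy: {accuracy(algo, actual):.2f}")
```

The same approach could be extended to error rates broken out by race or income, which is where the comparison between human and algorithmic decision-makers becomes most telling.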

Sources: futurity.org; “Criminal Justice Algorithms Still Discriminate”

 

 
