by Jayson Hawkins
In the push for criminal justice reform, several ideas have emerged to help fix our broken system.
Many experts have promoted risk assessments as effective tools that could be employed at every level of criminal justice to provide more objective standards. Widespread use of these tools is already making an impact, yet few know exactly what risk assessments are or how they are applied.
Risk assessments are mathematical models that measure the likelihood of a future event based on certain variables. When applied to parole, for example, such a program would weigh the prisoner’s age, prior convictions, and other factors to determine how likely the individual is to re-offend.
The primary target for risk assessments currently is the cash bail system, which incarcerates people who have not been convicted of a crime until their trial if they are unable to pay a bond. Bond was intended by the framers of our Constitution to be an inducement for someone charged with a crime to appear in court, at which time the bond would be refunded. However, contemporary judges routinely set bonds so high that only the very wealthy can pay them. Most people must turn to the services of bail bondsmen, who charge a 10 percent to 15 percent nonrefundable fee for covering the full price of the bond. Still, many Americans cannot afford such services and are left to languish in jail until their case is adjudicated. Using risk assessments would relieve judges of the sole responsibility of making bail decisions and create a more transparent system that provides less biased outcomes.
That is the theory, at least. In practice, risk assessments have produced mixed results. The problem is not that algorithms are inherently biased but that human fallibility shapes how they are programmed and how they are deployed, or, as is sometimes the case, misused.
One of the most popular algorithms, the Arnold Foundation’s Public Safety Assessment, relies on demographics and court records to generate its results. Other tools add a questionnaire conducted by a court official. Either way, the algorithms assign a specific weight to factors like employment, education level and prior arrests. How each variable is weighted affects the outcome, yet risk-assessment companies typically keep this part of their formula under wraps.
The values the algorithm determines for a particular individual are then measured against a known data set to make a prediction.
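In broad strokes, the scoring step described above works like a weighted checklist. The sketch below is purely illustrative: the factor names, weights, and cutoff scores are hypothetical stand-ins, since vendors typically keep the real values secret.

```python
# Toy weighted risk score in the spirit of pretrial assessment tools.
# All factors, weights, and cutoffs here are hypothetical examples.

FACTOR_WEIGHTS = {
    "age_under_25": 2,            # demographic factor
    "prior_convictions": 3,       # court-record factor
    "prior_failure_to_appear": 4, # court-record factor
    "currently_employed": -1,     # protective factor lowers the score
}

def risk_score(profile):
    """Sum the weights of whichever factors apply to this individual."""
    return sum(w for factor, w in FACTOR_WEIGHTS.items() if profile.get(factor))

def risk_level(score, low_cutoff=3, high_cutoff=7):
    """Map the raw score onto the low/medium/high bands a judge sees."""
    if score < low_cutoff:
        return "low"
    if score < high_cutoff:
        return "medium"
    return "high"

profile = {"age_under_25": True, "prior_convictions": True}
score = risk_score(profile)      # 2 + 3 = 5
print(score, risk_level(score))  # prints: 5 medium
```

The point of the sketch is that every design choice, which factors appear in the table, how heavily each is weighted, and where the cutoffs sit, is a human decision that shapes the final label.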
It is hard to argue with the cold logic of math, but an algorithm’s results will only be as accurate as its inputs. The variables selected and the quality of their information mirror the beliefs of the individuals who choose them, thus allowing human bias to seep into the equation.
Problems also arise because of a gap between what a risk assessment is designed to predict and what the available information actually reflects. Arrest and conviction records seem like obvious choices for criminal justice tools, but such data may reveal more about the behavior of police and judges than about the people on whom the model is intended to focus.
How a variable is defined also affects an assessment’s predictive value. Most models consider recidivism to mean one is arrested again within two years of release. A tool developed to measure the likelihood of an individual committing another crime, however, would be poorly served by such a definition. Arrests are commonly made in the absence of a crime, just as many crimes occur without any arrests. Because arrests happen more often in areas where police concentrate their activities, arrest rates tend to be skewed against people in minority and lower-income neighborhoods. This bias is compounded by the fact that once an individual is “in the system” their chances of being re-arrested skyrocket.
Another misapplication problem occurs when assessment tools are used for tasks for which they were not intended. One popular model, COMPAS, was formulated for case management in correctional institutions, but administrators have employed it for sentencing and other purposes. While some may assume one algorithm is as good as another, this is not unlike using a wrench to hammer a nail.
Those who favor risk assessments argue that models can be improved through observation and study to reduce bias. A recent computer simulation involving five years of New York City arrests demonstrated that eliminating human judgment from bail decisions could significantly lessen pretrial incarceration without impacting crime rates and do so in a non-discriminatory fashion.
Another study found that risk assessments could make more accurate predictions than judges when it came to which defendants would fail to show up in court. Other research revealed that untrained personnel can estimate risks of recidivism almost as well as COMPAS software, but the methodology of this particular study has proven controversial.
Although computer models may do a better job of making criminal justice decisions, there has been no rush to remove human judgment from the process. A 2016 case questioned whether it would even be constitutional to do so.
The Wisconsin Supreme Court found in State v. Loomis that the defendant’s right to due process was not violated by the use of COMPAS to generate risk scores that influenced his sentencing; however, the court did recommend standards for future application of assessment tools. First, when a model keeps its chosen inputs and/or their weighting secret, judges must be warned if the assessment has not been validated by independent research or regularly updated for accuracy. Second, judges must be informed about the possibility of racial bias in the algorithms and the dangers of applying models to situations for which they were not intended.
Every indication points to increased usage of risk assessments in the coming years. Ideally, this will lead to more people getting a fair shake in their dealings with the criminal justice system, though such an outcome is far from certain.