
The Real Minority Report: Predictive Policing Algorithms Reflect Racial Bias Present in Corrupted Historic Databases

by Matt Clarke

In Steven Spielberg’s science fiction movie Minority Report, police armed with nebulously sourced predictions arrest people who are supposedly going to commit murder before they can actually do so. Troublingly, only a few years after the film’s 2002 release, a real-world version of predictive policing was quietly implemented.

Whereas the movie dealt with the corrupt manipulation of the predictive mechanism by rich white people—there was scarcely a member of a “minority” in its cast—actual policing of predicted future crimes involves computers armed with secret algorithms created by private companies, drawing from numerous sources of historic data to identify patterns and predict where crimes will be committed and who will commit them.

The problem with this is that much of the historic data are corrupted by built-in biases against people of color and poor people. Thus, the computers learn these biases and repeat or even magnify them while giving them a “tech-washed” veneer of respectability and impartiality.

Two Kinds of Predictive Policing

In one kind of predictive policing called “place-based,” algorithms crunch massive amounts of data—the most important of which are usually records of police encounters, calls for service, arrests, and convictions—to determine how likely it is for a specific crime to occur in a specific area over a specified period of time.

The largest vendor of this sort of software, PredPol, allows police users to map crime “hot spots” by day and time. Other products track additional factors as well; Azavea’s HunchLab software, for example, also considers weather patterns, the locations of nearby businesses, transit stops, and government facilities, and even the phase of the moon or home game schedules.

The predictive results of the place-based algorithm are then used to allocate police resources to locations where greater crime rates are expected. A major problem with this model is reflected in the “implicit bias” training programs that most police departments, though they concede no bias, have been forced to implement: their historical data are corrupted by racially biased policing, in which poor and ethnically diverse neighborhoods were flooded with police who enforced laws at a level unseen in affluent, white neighborhoods.

Thus, a computerized analysis of historical policing data leads to the obvious but incorrect conclusion that the crime rate in heavily-policed neighborhoods is many times that of a barely-policed neighborhood. As its critics say, “place-based” predictive policing is aptly named in that it accurately predicts future policing, not future crime.

The other type of predictive policing software is called “person-based,” and it can be even more insidious. Less reminiscent of Minority Report than the surveillance state in George Orwell’s dystopian novel 1984, these computer programs attempt to predict whether a specific person will commit a crime in the future.

Although the algorithms do not track race as a datum, they track other data that can be linked to race such as name, educational and employment history, zip code of residence, and, of course, the racially-biased historical policing data. Alarmingly, these algorithms can also track social media connections, location data from purchases, and automatic license plate readers—even whether a person orders a pizza from Papa John’s or Domino’s.

Person-based predictive policing has serious consequences. Studies have shown these programs to be heavily biased against people of color and immigrants, even though they may not directly track race or national origin. The risk assessments generated by algorithms in Chicago, for example, ended up being used throughout the criminal justice system to determine the amount of bail to set, whether to grant probation, the length of a sentence, and the suitability for release on parole.

Further, person-based databases sweep up not only prolific offenders but their contacts—some of whom have never been arrested, much less convicted of a crime—leading to questions about infringement on privacy and due process rights. Thus, going beyond the “thought crime” that Orwell chillingly described, these algorithms can result in the detention or imprisonment of people for crimes they have not even thought of.

Place-Based Predictive Policing Algorithms

Of course, attempts at predicting crime are nothing new. Police have long known which parts of town are crime-prone, or at least prone to the sorts of crime that police focus on, and scholars have been mapping crime since the 19th century. But modern crime tracking has its genesis in the 1980s crime spike in New York City, when police started systematically correlating crimes with locations.

The most notable figure from that period was Jack Maple, a quick-talking salesman of a police officer who started at the bottom rung as a transit cop and worked his way up the ladder until he wore double-breasted suits, highly-polished two-tone shoes, and a homburg hat to work. In 1990, the movers and shakers at the Manhattan headquarters of the New York Police Department (“NYPD”) listened to this near-mythical figure when he covered 55 feet of wall space with butcher paper and showed them what he called his Charts of the Future.

“I mapped every train station in New York City and every train,” Maple once explained to an interviewer. “Then I used crayons to mark every violent crime, robbery, and grand larceny.”

Bill Bratton, then head of the transit police and later NYPD Commissioner, took note of Maple’s predictions and sent extra patrols to the alleged high-crime areas depicted on the charts. By 1994, Maple’s ideas were digitized and formalized into a computerized police management system called CompStat that was used to hold individual precinct commanders accountable for crime levels in their sectors.

This concept of “hot spot” policing soon spread across the nation’s law enforcement departments in a variety of forms, replacing the previous model of “reactive policing,” used until the 1970s, in which police patrolled a sector while waiting to respond to reports of crimes. Studies had shown reactive policing to be ineffective at reducing crime rates, so police departments were primed for a change.

After the 9/11 attacks, a new emphasis on security supercharged interest in what researcher Jerry Ratcliffe called “intelligence-led policing,” with an emphasis on both predicting crime and pre-empting it. When Bratton became Chief of the Los Angeles Police Department (“LAPD”) in 2002, he encouraged researchers at the University of California-Los Angeles (“UCLA”) to work with police officers to develop a computer algorithm that could recognize patterns in the many years of available historical data on crime types, locations, and times and then use what it found to predict future crime patterns.

The researchers modified models originally used to predict seismic activity, creating an algorithm that uses a sliding-window approach to predict the next day’s crime rates in specified 500- by 500-foot plots. This original place-based algorithm was tested for months in LAPD’s Foothills Division beginning in late 2011, focusing on property crimes. Police in that division claimed property crimes fell by 13% while they increased by 0.4% in the rest of the city.

As an early adopter of computerized predictive policing, LAPD had begun working with federal agencies in 2008 to develop new approaches to predictive policing. Soon, the federal Bureau of Justice Assistance (“BJA”) was providing grants to cities around the country to implement predictive policing programs. This sparked the interest of a number of large companies such as IBM, Hitachi, and LexisNexis, which developed predictive policing algorithms in an attempt to cash in on the federal dollars being handed out to support predictive policing research, development, and deployment. Startups focused on predictive policing also sprang up.



The Foothills Division’s predictive policing program, dubbed PredPol, spun off into a private company that became one of the leading vendors of predictive policing systems, marketing the algorithm developed at UCLA. PredPol quickly raised $1.3 million in venture capital, and the program was rapidly adopted by more than 50 police agencies in the U.S. and U.K. It is still one of the most widely-used place-based algorithms.

PredPol still divides an area into blocks that are 500 feet by 500 feet, and now uses patterns it detects in historical data about calls for police services, encounters with police, arrests, and convictions to predict the likelihood of a specified crime occurring in the block within the next 12 hours.
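PredPol’s actual model is proprietary (its founders’ published papers describe an epidemic-type aftershock model adapted from seismology), so the following is only a toy sketch of the general idea described above: bin reported incidents into 500-by-500-foot grid cells, count events inside a sliding time window, and rank the cells. The function names, coordinates, and data here are all hypothetical.

```python
from collections import Counter

CELL_FT = 500  # PredPol-style 500 x 500 ft grid cell

def to_cell(x_ft, y_ft):
    """Map a coordinate (in feet) to its grid cell."""
    return (x_ft // CELL_FT, y_ft // CELL_FT)

def hot_spots(events, now, window_hours=12, top_n=3):
    """Rank grid cells by the number of recorded incidents falling inside
    a sliding time window ending at `now`.

    events: iterable of (x_ft, y_ft, hour_timestamp) tuples.
    Returns up to top_n cells, busiest first -- a crude stand-in for a
    probability ranking over the next 12 hours.
    """
    cutoff = now - window_hours
    counts = Counter(to_cell(x, y) for x, y, t in events if cutoff <= t <= now)
    return [cell for cell, _ in counts.most_common(top_n)]

# Hypothetical data: recorded incidents cluster near coordinate (600, 600).
events = [(600, 600, t) for t in range(100, 110)] + [(50, 2000, 105)]
print(hot_spots(events, now=110))  # → [(1, 1), (0, 4)]
```

Note that the ranking depends entirely on what was *recorded*, which is the article’s central point: the input is police activity, not ground-truth crime.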

LAPD also developed a person-based predictive policing program, called LASER, with a $400,000 grant from BJA in 2009. It focused on predicting gun crimes, but it was discontinued in 2019 after the LAPD’s inspector general released an internal audit report that found significant problems with the program.

On April 21, 2020, LAPD Chief Michel Moore announced the discontinuation of the PredPol program, too. Yet Moore defended the controversial program, even though the LAPD inspector general had stated a year earlier that he could not determine its effectiveness; Moore said instead that pandemic-related budget constraints prevented its continuation.

Regardless of Moore’s statement, community leaders who have long opposed the PredPol program as unfairly targeting Black and Latino communities claimed its discontinuation as a victory over entrenched LAPD racism.

“This is all through the hard work of community folks,” said Hamid Khan of the Stop LAPD Spying Coalition. “This is community power that shows we can dismantle it and stop these egregious tactics.”


Developed by Philadelphia-based startup Azavea, HunchLab is similar to PredPol in that it uses historical police records as input to train its artificial intelligence (“AI”) algorithm to seek out patterns and make predictions. But HunchLab goes even further, using other factors such as population density, census data, day of the month, weather patterns, locations of schools, bars, churches, and transportation hubs, as well as schedules for home ball games and phases of the moon in making its predictions.

This flexibility of input, combined with a lower fee of $45,000 up front and $35,000 annually thereafter (about two-thirds less than other AI tools), was the reason given for the adoption of HunchLab by Missouri’s Saint Louis County Police Department in 2015.

NYPD Predictive Policing Program

NYPD is the largest police force in the U.S. It began testing predictive policing software in 2012, inviting Azavea, KeyStats, and PredPol to take part in trials. It ultimately rejected them in favor of creating its own in-house algorithms, deployed in 2013. According to an internal NYPD document from 2017, the department developed separate algorithms for shootings, burglaries, felony assaults, grand larcenies, motor vehicle thefts, and robberies, then used them to direct the deployment of extra police officers to crime “hot spots” identified by the algorithms.

NYPD is secretive about the specifics of its predictive policing program. It describes the data fed into the algorithms as major crime complaints, shootings, and 911 calls for shots fired, but it has refused to disclose the actual raw data sets used in response to a public records request by the Brennan Center. There is heightened concern since NYPD has lost or settled several lawsuits over its racist policing practices.

Problems with Place-Based Predictive Policing

Place-based predictive policing programs have been criticized for a number of problems: a lack of transparency, the adoption of biases from corrupted historical datasets, a potential erosion of Fourth Amendment protections against search and seizure, and the “tech-washing” of racial prejudices inherent in the input data, which blesses them with a veneer of objectivity.

Lack of Transparency

As demonstrated by NYPD’s reluctance to release information about its predictive policing program, one of the chief criticisms of predictive policing is a lack of transparency. Most predictive policing software has been developed by private corporations that claim their algorithms and/or the databases used by the algorithms are trade secrets and refuse to reveal them.

This is a serious problem for several reasons. One is a lack of oversight. The government has already shown itself to be extraordinarily inept at regulating or even keeping up with the rapid pace of digital technology.

This is especially true of the judicial system. As one prominent Texas attorney once told this writer during a discussion about then-new DNA testing technology, “Lawyers can’t do science. If lawyers could do science, they would be doctors.” Sadly, this has proven to be true, and since judges and most lawmakers are lawyers, they have proven to be quite inept at “doing” science. As retired federal judge Richard A. Posner once put it, “Federal judges are on the whole not well adapted by training or experience to the technological age that we live in.”

However poor the government might be at overseeing digital technology, it cannot provide any oversight at all of what is hidden behind claims of trade secrets. That is why it is important to enact legislation requiring companies that provide algorithms used in predictive policing, no less than in other parts of the criminal justice system, to reveal their source code and databases so that they can be checked by third parties for biases and errors.

Another problem with predictive policing algorithms lies in the nature of machine learning itself. AI algorithms like those used in predictive policing are designed to recognize patterns in their databases and then adjust themselves to make extrapolations based on that information. As these self-adjustments render the program more and more complex, even its original developers no longer fully understand how it has continued to self-program.

“As machine learning algorithms are exposed to more data, they autonomously become more ‘context specific and often based on thousands or millions of factors’ in a manner that is indecipherable to human programmers,” according to one researcher, data scientist Orlando Torres, who calls these new types of algorithms “WMDs”—weapons of math destruction.

This problem is exacerbated by the fact that AI programs have been shown to adopt the biases present in their databases. A study by Joanna Bryson, a researcher at the University of Bath in the U.K., used Implicit Association Tests to show that commonly used off-the-shelf AI programs pick up stereotypical cultural biases just by being exposed to a standard body of text from the Web. Thus, a bundle of names typical among European-Americans was significantly more associated with pleasant than unpleasant terms compared to a bundle of African-American names, and female names were more associated with family than career words compared to male names.
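The measurement behind such Implicit Association-style tests of AI can be illustrated with a toy sketch: score how much closer a word’s vector sits to “pleasant” terms than to “unpleasant” ones. The real study used trained word embeddings learned from Web text; the two-dimensional vectors below are invented purely for illustration.

```python
import math

def cosine(u, v):
    """Cosine similarity between two vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    norm = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return dot / norm

def association(word, pleasant, unpleasant):
    """Mean similarity to pleasant terms minus mean similarity to
    unpleasant terms; positive means 'more pleasant-associated'."""
    p = sum(cosine(word, v) for v in pleasant) / len(pleasant)
    u = sum(cosine(word, v) for v in unpleasant) / len(unpleasant)
    return p - u

# Invented 2-d "embeddings" purely for illustration.
pleasant = [(1.0, 0.0)]
unpleasant = [(0.0, 1.0)]
name_a = (0.9, 0.1)  # hypothetical vector a model learned for one name
name_b = (0.1, 0.9)  # ...and for another

print(association(name_a, pleasant, unpleasant) > 0)  # → True
print(association(name_b, pleasant, unpleasant) > 0)  # → False
```

If text on the Web systematically places one group’s names nearer unpleasant words, a model trained on that text reproduces the skew, exactly as the Bath study found.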

A business consequence of this type of bias-learning occurred when Microsoft rolled out its AI chatbot, Tay, in 2016. Users of the Internet message board 4chan launched a coordinated effort to flood Tay with misogynistic and otherwise offensive tweets. This became part of the data Tay used to train itself, and within a day, Microsoft had to pull the program because it was generating offensive tweets.

A similar database-driven mis-training of an AI occurred when Google launched Google Flu Trends, an AI designed to predict influenza outbreaks. There was some early success, but Flu Trends completely missed the 2009 influenza A (H1N1) pandemic and consistently over-predicted flu cases between 2011 and 2014.

The culprit, it turned out, was Google itself. As its popular search engine recommended flu-related search terms as possibilities to people who did not have the flu, it seeded its own data with excess flu-related queries—which Flu Trends then used to predict cases of the flu that did not exist.

Algorithmic predictions based on corrupt data are themselves corrupt. So when predictive policing algorithms use historical police data in which lower-income communities and people of color are over-represented, those same communities will also be over-represented in the algorithms’ predictions of future crime.

Garbage In, Garbage Out

Computer scientists often say “GIGO” when confronted with corrupted data. This is an acronym for “garbage in, garbage out.” In other words, you cannot get good results from bad data. That is the reason it is especially important to make the data used by predictive policing algorithms available for public inspection.

With the widespread introduction of implicit bias training, some police departments have begun to tacitly admit that their officers have historically engaged in racially discriminatory policing. That means that their historical crime data suffer the same bias—and so will any effort at predictive policing that relies upon it.

Without transparency and oversight, this merely leads to a feedback loop predicting ever more crime in those areas that have been over-policed, giving the tech-washed appearance of impartiality to data that actually reflect past racial bias.

Racist Bias in Policing Databases

Most place-based predictive policing software programs lean heavily on historical information from police databases, such as calls for service, reports of gunshots, encounters with police, and arrests. There are several problems with these data, but the biggest one is the historic racial prejudice embedded in them. Even if the algorithms do not use race as an overt datum, the fact that policing of different communities varies greatly still injects racism into the AI’s pattern-learning, as people in low-income, urban, Black neighborhoods are much more likely to have encounters with police than affluent whites in the suburbs.

“Information about criminal records, for example, is heavily racially and socio-economically skewed from decades of police practices targeting specific types of crime that correlate with specific neighborhoods, which in turn correlate with minority communities to create a feedback loop of accumulating crime data against poor people of color,” according to legal researcher Namrata Kakade, in her 2020 report published in the George Washington Law Review titled, “Sloshing Through the Factbound Morass of Reasonableness.”

Two other researchers, Kristian Lum and William Isaac, note that this feedback loop makes the model become “increasingly confident that the locations most likely to experience future criminal activity are exactly the locations [that police] had previously believed to be high in crime: selection bias meets confirmation bias.”

In addition to heavy police presence in poor, minority neighborhoods that results in more interactions with police, certain crimes are more heavily policed against people of color. For instance, although surveys have shown that all racial groups use marijuana at about the same rate, Blacks are arrested for possession of marijuana at a much higher rate than whites. Therefore, AI would erroneously conclude that marijuana crimes are committed in predominantly Black neighborhoods at a much higher rate than in predominantly white neighborhoods and recommend increasing police presence in the predominantly Black neighborhoods.

Because having more officers in a neighborhood means more entries in the police database from which the AI draws its inferences, the program gets stuck in a feedback loop reinforcing the original erroneous conclusion—that Blacks commit more marijuana crimes, for example—and as a result sends more police into the community. Thus, as Lum and Isaac observe, “place-based predictive policing” is a particularly apt name for this type of program: “it is predicting future policing, not future crime.”

Since calling for police assistance is a function of the trust a community has in the police, those communities with less trust are more reluctant to do so. As a result, police data sets are not records of all crimes but rather records of all crimes known to the police.

“Criminal records are thus not a measure of crime, but rather a complex interplay between criminal activity, strategies employed by the police, and community relationships with the police,” as Kakade observes, adding that because algorithms based on them “are perceived to be objective, their discriminatory effects run the real risk of remaining largely hidden and unimpeachable.”

Objectively Measuring Bias in Police Data Sets

The only predictive policing algorithm whose source code and data sets have been made public is PredPol, the place-based algorithm developed by UCLA researchers working with the LAPD. Researchers Lum and Isaac—she is lead statistician at the Human Rights Data Analysis Group, while he is Senior Research Scientist on DeepMind’s Ethics and Society Team—took advantage of this open-source data to test PredPol on a synthetic population representing Oakland, California, for a report they issued in October 2016.

The synthetic population was a demographically accurate representation of the actual population of Oakland in 2011, using data from the U.S. Census to label individuals by age, household income, location, and race. Because police databases are known to be incomplete records of crime and corrupted with racial bias, the researchers used data from the 2011 National Survey of Drug Use and Health combined with the synthetic population to obtain high-resolution estimates of drug use that were higher quality and more accurate than estimates based on police databases.

Comparing the researchers’ results with data from police databases showed that the police databases underestimated the actual amount of drug use while also reflecting a dramatically different pattern of drug use according to location.

Using the police databases, the PredPol algorithm produced results that falsely showed higher rates of drug use in West Oakland and along International Boulevard—areas with large Black populations that had historically been subjected to over-policing and thus were over-represented in historic police data. Blind to this inherent bias, however, PredPol recommended targeting those areas with additional police, further reinforcing the bias in the police data. This meant that, for drug-crime-related encounters with police, the rate for Blacks was twice that of whites and for Hispanics it was 1.5 times that of whites—despite survey data showing that drug use was about the same for all three groups.

Specifically, about 16% of synthetic population whites used drugs, about 15% of Blacks, and about 14% of Hispanics. Yet only around 5% of whites had drug-related police encounters, compared with about 10% of Blacks and 7.5% of Hispanics.

These data call into question an implicit assumption of targeted policing: that the presence of additional police officers does not change the number of reported crimes in a given location. Testing this assumption, the researchers added 20% to the crime rate for certain areas that PredPol assigned additional police. This should have resulted only in a shift of the predicted crimes—one that followed the same trajectory as the predictions made before the additional crimes were added but at a higher level.

Instead, the algorithm slipped into a feedback loop, using the initially higher observed crime rate to predict ever higher probabilities of crime occurring in the targeted areas over the ensuing year. Eventually, it deviated from the baseline to raise the crime rate in the target neighborhood from just over 25% to more than 70%—over twice the crime rate that the researchers input into the program for the synthetic neighborhood.
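The runaway feedback loop the researchers observed can be reproduced in miniature. The sketch below is a hypothetical simplification, not the researchers’ actual model: two districts have identical true crime rates, but crime is recorded only where the patrol is sent, and the patrol is sent wherever the record shows more crime, so a one-record head start snowballs.

```python
import random

def simulate(days=365, true_rate=(0.1, 0.1), seed=0):
    """Two districts with identical true crime rates. Each day the single
    patrol goes to whichever district has more *recorded* crime, and crime
    is recorded only where the patrol is. District 0's one-record head
    start locks in every future patrol, so district 1 never generates data."""
    random.seed(seed)
    recorded = [1, 0]  # district 0 starts with one extra record
    for _ in range(days):
        target = 0 if recorded[0] >= recorded[1] else 1
        if random.random() < true_rate[target]:
            recorded[target] += 1  # crime observed only where police patrol
    return recorded

counts = simulate()
print(counts[1])  # → 0: the unpatrolled district appears crime-free
```

After a year of identical underlying crime, the data say one district has all the crime and the other has none—selection bias meeting confirmation bias, as Lum and Isaac put it.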

PredPol has been described by its founders as “race-neutral,” using only three data points to make its predictions: past type of crime, place of crime, and time of crime. Nonetheless, the AI was able to come up with consistently racist results when tested with drug crime statistics in this synthetic Oakland neighborhood. Even though race is not an item of data drawn from historic police databases and fed into the algorithm, the researchers showed that the racist taint cannot be removed from police data sources. Predictive policing using PredPol in a real neighborhood that is poor and majority-minority—like the synthetic one—would also result in biased over-policing.

That conclusion falls into line with the results of a 2019 study by the U.K.’s Government Centre for Data Ethics and Innovation, which found that just by identifying an area as a crime hot spot, a police department primed its officers to anticipate trouble while on patrol, making them more likely to arrest people in the area due to their bias rather than necessity. The result was that after an area was designated a crime hot spot, there were more reports of crime generated in police statistics—even though the actual amount of crime remained constant.

Since reported crime is the only crime the police know about and the only basis for crime rates, designating an area a crime hot spot causes the measured crime rate in that area to rise, reinforcing the “hot spot” designation whether or not it is warranted.

On November 26, 2020, the 18-member United Nations Committee on the Elimination of Racial Discrimination held a meeting to report its findings after two years of researching the consequences of using AI in predictive policing. While acknowledging that AI could increase effectiveness in some areas, the influential committee issued a warning that using this technology could be counterproductive, causing communities exposed to discriminatory policing to lose trust in the police and become less cooperative.

“Big data and tools may reproduce and reinforce already existing biases and lead to even more discriminatory practices,” said Dr. Verene Shepherd, who led the committee’s discussion on drafting its findings and recommendations. “Machines can be wrong. They have been proven to be wrong, so we are deeply concerned about the discriminatory outcome of algorithmic profiling in law enforcement.”

Other Errors and Biases in Police Databases

In one notorious example of GIGO, the LAPD misreported a staggering 14,000 serious assaults as minor offenses between 2005 and 2012. This wholesale data manipulation—perhaps the result of pressure to improve crime statistics, some officers charged—was not discovered until 2015, long after PredPol was implemented. Whether intentionally corrupted or not, the effect of this bad data on PredPol’s self-programming machine learning and predictions remains unknown.

Similarly, over 100 retired NYPD officers revealed in 2010 that their supervisors and precinct commanders were manipulating crime data entered into NYPD’s CompStat system in response to intense pressure to achieve annual crime reductions. The former cops also said that precinct commanders would sometimes drive to a crime scene and attempt to persuade victims to refrain from filing complaints. Other NYPD officers admitted planting drugs or falsifying records to meet arrest quotas. Again, the effect of this intentionally corrupted data on NYPD’s predictive policing remains unknown.

A 2019 study looked at 13 police jurisdictions known both to use predictive policing algorithms and to have corrupted historical databases due to racially biased policing, arrest quotas, and data manipulation. The jurisdictions were Baltimore; Boston; Chicago; Ferguson, Missouri; Maricopa County, Arizona; Miami; Milwaukee; New Orleans; New York City; Newark, New Jersey; Philadelphia; Seattle; and Suffolk County, New York. Nine of the jurisdictions admitted using corrupted “dirty data” as input to their predictive policing algorithms. The study’s authors—researchers Rashida Richardson, Jason M. Schultz, and Kate Crawford—concluded that no data from a police department with historically biased practices should ever be used in predictive policing.

Even when uncorrupted, though, police databases are not reflective of true crime rates, the researchers added. That’s partly because, as the Department of Justice estimates, less than half of violent crimes and even fewer household crimes are reported to police. Further, police databases virtually ignore white collar crime, which occurs at a much higher rate than violent crime.

Erosion of Fourth Amendment Rights

Place-based predictive policing has been criticized for leading to an erosion of the Fourth Amendment’s protection from unlawful search and seizure. There is good reason for this concern.

Long before the advent of predictive policing algorithms, in Terry v. Ohio, 392 U.S. 1 (1968), the U.S. Supreme Court held that police need only “reasonable suspicion” to justify stopping and frisking a person. Later decisions added that a location’s designation as a “high crime area” may help establish such reasonable suspicion.

In Illinois v. Wardlow, 528 U.S. 119 (2000), the Court reiterated that a “high crime area” designation may be used to determine whether there is “reasonable suspicion” sufficient to justify a “stop and frisk.” Although Wardlow involved both a high crime area and the suspect’s unprovoked flight upon seeing the police, many scholars believe it paved the way for computerized predictions of high crime rates in a location to be used to justify Terry stops.

Currently, algorithmic crime forecasts do not, by themselves, meet the threshold of reasonable suspicion required by Terry. However, the Terry requirements were court-imposed and could be easily changed by another Supreme Court decision. Further, the current Supreme Court has a very different makeup from that of the Court that heard Terry argued in 1967, opening the possibility of revision of the Terry standards. This could lead to police sweeps of “high crime areas” with officers stopping, frisking, and pat-searching everyone found within the area based upon algorithmic predictions of future crime.

Tech-Washing Racial Biases

Although predictive policing algorithms have been touted by their vendors and police as helping to eliminate racial bias from policing, we can now see that they may actually amplify the racial bias built into historical policing data. Because that bias is introduced into the self-programming AI computer algorithm, it may be overlooked by the creators of the algorithm, who claim—and probably would even testify—that their programming is bias-free. Judges, who are seldom known for their technological prowess, may then rule that the programs are without bias even when the data show they are biased.

This is a way of using data to make racial bias seem nonexistent in technology, increasing its acceptability. Often called “tech-washing,” it is an important issue when dealing with person-based predictive policing.

Person-Based Predictive Policing

With person-based predictive policing, the risk posed to an individual’s civil rights is even greater than that posed by predictive policing of the place-based kind.

The public has already been conditioned to accept that some people need to be punished for their potential to commit crimes in the future. We see this in “risk assessments” that influence decisions on bail, sentencing, and parole. In death penalty states, life or death is determined by a jury’s assessment of “future dangerousness.”

Even the registration of sex offenders, which is mandatory in all 50 states and the District of Columbia, carries criminal penalties for noncompliance based on a registrant’s supposed dangerousness and likelihood of reoffending. Its justification is that even a small risk that those people who have been convicted of sex offenses (as well as certain drug or violent crimes in some states) might commit crimes in the future makes it necessary for their identities and personal information to be publicly available so that their neighbors can keep an eye on them.

These uses of person-based crime prediction are beyond dystopian for the people caught in their web, yet they pale in comparison to the combination of rapid digital processors, predictive AI algorithms, and big data currently being used to track people and predict their future behavior.

The input for these algorithms can come from a variety of sources. More frighteningly, it can also encompass data about people who have never committed a crime or even had an encounter with police.

Because the algorithms are seeking to detect patterns, specifically connections between people, they have to include information not just about those who have been convicted of a crime or even arrested for one, but also others they know—who may be entirely innocent of any crime.

This includes information from social media, mobile telephones, automatic license plate readers, surveillance cameras in public spaces—including police body cameras—equipped with facial recognition, and even online purchases.

Ultimately, some scholars believe person-based predictive policing will become so widely accepted that it will become the basis for the type of “preventative detention” seen in Spielberg’s Minority Report. Although many Americans can scarcely believe that this could happen here, there are already forms of preventative detention that have been used for decades.

Ignoring the fact that sex offenders have a documented lower recidivism rate than other violent criminals, the 20 states allowing civil commitment for sex offenses lock these people into prison-like settings not for the crimes they committed and served time for but because they might commit those crimes again in the future.

As in the old Soviet Union, those who have been committed are locked up for “treatment,” which rarely happens for any “psychological condition” recognized by the standard psychiatric diagnostic manual, and they are not released until, at some arbitrary point in the far future, the government says so.

This goes beyond the punishment for a “thought crime” that Orwell imagined in 1984. Now, we are punishing crimes that haven’t even been thought of yet. And the public is primed to accept preventative detention even when it is based on pseudoscience.

Likewise, the public has learned to accept the lengthy detention of people the government thinks might be terrorists but has insufficient evidence to convict or even criminally charge. Some of the detainees in the U.S. Navy Base at Guantanamo Bay, Cuba—known as “Gitmo”—have spent nearly two decades enduring harsh prison conditions without ever being tried for the crimes of terrorism they are suspected of committing. Often, the evidence against them is allegedly based on rumors and suppositions or, even worse, lies told to U.S. forces by their enemies.

Gitmo was set up on foreign soil because the government knew this kind of preventive detention would violate the U.S. Constitution. But that was in 2002. In 2021, it is not so certain that the public or the Supreme Court would denounce Gitmo were it located inside the U.S.

Civil commitment of sex offenders and decades-long detentions at Gitmo were instituted before the confluence of high-speed computer processors, virtually unlimited data storage, AI algorithms, and big data that has made person-based predictive policing possible, so they may not have been intended to lower the public’s threshold of tolerance for police intrusion into the private sphere. But amid the fear generated by the 9/11 attacks, they certainly had that effect.

Chicago’s ‘Strategic Subjects List’

Before the program was abandoned in November 2019, the Chicago Police Department (“CPD”) spent eight years gathering information on the city’s residents in an attempt to use AI algorithms to predict which people would commit, or become the victims of, violent crime. Everyone flagged was compiled into a “Strategic Subjects List” (“SSL”), which officers on patrol could consult during street encounters.

The SSL had its origins in the 2000s, when the CPD started experimenting with predictive policing in an attempt to improve its “hot spot” policing model. As the focus of predictions drifted from places to the people committing crimes, the CPD became one of the first large police departments to receive a federal grant to explore person-based predictive policing.

Inspired by the theory of Yale University’s Andrew Papachristos—who believed violence could be tracked as it spread throughout networks of associated people using epidemiological models developed to track the spread of disease—researchers at the Illinois Institute of Technology developed the SSL algorithm in 2013. It ranked its subjects according to their risk of being involved in violence. To do so, it used criminal histories—with emphasis on drug or gun crimes—and also looked at who had suffered a gunshot wound or been affiliated with a street gang.
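The epidemiological intuition—risk spreading through a social network from shooting victims to their associates—can be illustrated with a toy sketch. This is a hypothetical simplification: the actual SSL algorithm was never fully disclosed, and the function, graph, and decay factor below are invented for illustration.

```python
# Toy illustration of network-based risk scoring in the spirit of the
# epidemiological model said to have inspired the SSL. This is a
# hypothetical sketch, NOT the CPD's actual (undisclosed) algorithm.

def risk_scores(edges, seeds, decay=0.5, rounds=3):
    """Spread risk from 'seed' individuals (e.g., gunshot victims)
    to their associates through an affiliation network."""
    # Build an undirected adjacency list from the edge pairs.
    graph = {}
    for a, b in edges:
        graph.setdefault(a, set()).add(b)
        graph.setdefault(b, set()).add(a)

    scores = {person: 0.0 for person in graph}
    scores.update({s: 1.0 for s in seeds})

    for _ in range(rounds):
        updated = dict(scores)
        for person, neighbors in graph.items():
            # A person's risk is at least `decay` times the riskiest neighbor.
            spread = decay * max(scores[n] for n in neighbors)
            updated[person] = max(updated[person], spread)
        scores = updated
    return scores

# A and B were shot; C knows B; D knows C; E knows only D.
edges = [("A", "B"), ("B", "C"), ("C", "D"), ("D", "E")]
scores = risk_scores(edges, seeds={"A", "B"})
# Risk decays with distance from the victims: C > D > E.
```

The sketch makes the core civil-liberties problem visible: E has no connection to any shooting beyond knowing D, yet still receives a nonzero risk score.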

After having his research cited as the inspiration for the SSL, as well as for Palantir’s person-based predictive policing algorithm (elaborated on below) and even PredPol, Papachristos tried to distance his research from them.

But the CPD was proud of its tool, and ranking officers often cited a suspect’s SSL “score” when talking to the press about a high-profile crime. Its use became integral to the CPD’s daily operations.

SSL information was shared with other law enforcement agencies and even clergy and community leaders, who accompanied police on door-to-door visits to some of the people with the highest scores to warn them about the dangers of a criminal lifestyle and offer social services to steer them from trouble. Soon, though, the emphasis on social services waned, and threats of enhanced incarceration and continuous police scrutiny took its place.

Two things led the CPD to abandon the SSL. First, a 2019 report by the Rand Corporation concluded that it was ineffective at reducing violent crime and had created a “level of public fear” that overshadowed the program’s goal of reducing gun violence. The report also noted that the SSL “tarnished shooting victims” and improperly used arrest data so that people with recent arrests were subjected to “additional scrutiny and punishment” even if they were acquitted or never indicted. Second, and perhaps more important to the CPD, the $4 million federal grant used to create and maintain the SSL ran out.

The amount of information the SSL kept on people who had no known connection with crime was staggering. Fully 56 percent of Black men over 30 years old in Chicago had a high SSL score, according to the 2019 study by Richardson, Schultz, and Crawford, who noted that “[i]ndependent analysis by Upturn and the New York Times found that more than one third of individuals on this [SSL] list have never been arrested or the victim of a crime, and almost seventy percent of the cohort received a high-risk score.”

After the program was terminated, the CPD’s Inspector General released a report highly critical of the SSL. The report noted that the SSL used arrest data without checking to see if the arrestee had been convicted, that the CPD had not properly trained its officers in the use of the SSL, and that the department had shared SSL data with the Cook County sheriff’s and state attorney’s offices—as well as the Chicago mayor’s office—without providing any guidelines on how to use it.

The abandonment of the SSL program does not mean that the CPD has given up on high tech, however. Instead, according to CPD spokesman Anthony Guglielmi, there has been a shift in strategy toward building police technology centers. Helped by a $10 million private donation from hedge fund billionaire Ken Griffin, all but one of the city’s 22 police districts had a new Strategic Decision Support Center (“SDSC”) as of June 2020. Each includes computer programs and specialized tools such as gunshot-detection technology, along with an extensive network of surveillance cameras that police can use for real-time tracking.

Palantir Technologies

Co-founded in 2004 by Alexander Karp and PayPal co-founder Peter Thiel, using seed money from the CIA’s venture capital firm, Palantir Technologies rapidly rose to become one of Silicon Valley’s most valuable private companies before going public in October 2020.

Initially, the Palo Alto firm worked for the Pentagon and U.S. intelligence services, developing wartime risk-assessment programs for deployment in Afghanistan and Iraq, where they were unencumbered by minor inconveniences such as civil rights and an inquiring press. Palantir also created predictive analytics to help businesses develop consumer markets and increase the efficiency of investments.

Soon, the flood of federal funding for predictive policing algorithms attracted Palantir’s attention. The first public record of its entry into predictive policing dates to 2012, when its representatives approached the City of New Orleans.

That year, Democratic power broker and paid Palantir consultant James Carville flew to New Orleans, accompanied by Palantir CEO Alex Karp. They met with then-New Orleans Mayor Mitch Landrieu (D) to set up what the company describes as a “pro bono and philanthropic” relationship to do “network analysis” for law enforcement and other city departments.

Among the considerations Palantir got in return was free access to the city’s LexisNexis Accurint account. Accurint is a nationwide database containing millions of searchable public records, court filings, licenses, addresses, phone numbers, and social media data—information critical to expanding Palantir’s reach to other cities.

Palantir publicly listed its work in New Orleans on its 2015 annual philanthropic report, but a cynical view might be that Landrieu agreed to allow Palantir to use his city as a laboratory for secretly beta-testing the company’s newly-developed predictive policing software.

In 2014, Carville and his political-consultant wife, Mary Matalin, appeared on KQED’s Forum talk show and discussed Palantir’s New Orleans venture, though without revealing Carville’s professional connection to the company.

“The CEO of the company called Palantir—the CEO, a guy named Alex Karp—said that they wanted to do some charitable work, and what’d I think?” Carville told the host of Forum, Michael Krasny. “I said, we have a really horrific crime rate in New Orleans. And so he came down and met with our mayor (and) they both had the same reaction as to the utter immorality of young people killing other young people and society not doing anything about it. And we were able to, at no cost to the city, start integrating data and predict and intervene as to where these conflicts were going to be arising. We’ve seen probably a third of a reduction in our murder rate since this project started.”

When Krasny asked whether the program could lead to the arrest of innocent people, Matalin minimized that concern.

“We’re kind of a prototype,” admitted Matalin. “Unless you’re the cousin of some drug dealer that went bad, you’re going to be okay.”

But in case Matalin is unaware, innocent cousins of drug dealers have civil rights too. Further, the kind of information Palantir accesses could be considered an invasion of privacy—and not just the privacy of drug dealers’ innocent cousins.

In addition to criminal history data, Palantir uses information it scrapes from social media, field interview (“FI”) cards, and jailhouse telephone calls to establish a social network of people associated with, for instance, the victim of a shooting.

The FI cards record any contact between the New Orleans Police Department (“NOPD”) and a member of the public, including those that do not result in an arrest. For years, NOPD Chief Ronald Serpas, who served from 2010 through August 2014, mandated the collection of FI cards as a measure of officer and district performance, generating about 35,000 cards each year.

Instituted specifically to gather as much intelligence on the city’s residents as possible, regardless of whether they were suspected of criminal activity or not, the practice was akin to the NYPD’s infamous “stop-and-frisk” program.

As Serpas explained, the program was initially run by a former CIA intelligence analyst, Jeff Asher, who joined the NOPD in 2013 and stayed two years. Asher would run social network analyses to find people associated with a shooting victim using social media and FI cards. During a 2014 internal Palantir conference, he claimed to be able to successfully “identify 30-40% of [future] shooting victims.”

“This data analysis brings up names and connections between people on FIs, on traffic stops, on victims of reports, reporting victims of crimes together, whatever the case may be,” boasted Serpas. “That kind of information is valuable for anybody who’s doing an investigation.”

The NOPD and Palantir came up with a list of about 3,900 people who were at the highest risk of being involved in a shooting due to associations with victims or perpetrators of previous shootings. It then implemented the “carrot” side of its “NOLA for Life” program. Between October 2012 and March 2017, the NOPD summoned 308 of the people on the list to “call-in” meetings in which they were told that it had been determined they were at high risk of engaging in a criminal lifestyle and offered them help in vocational training, education, and referrals for jobs. Of those 308, seven completed vocational training, nine completed a “paid work experience,” 32 were employed at some time based on a referral, and none completed high school or a GED course. However, 50 were detained, and two have since died.

Call-ins declined as time went on. Between 2012 and 2014, there were eight group call-ins. During the next three years, there were three.

In contrast to the small carrot part of the “NOLA for Life” program, the NOPD wielded a large stick. From November 2012, when it founded a Multi-Agency Gang Unit, until March 2014, 83 alleged gang members belonging to eight gangs were indicted for racketeering, according to an internal Palantir presentation.

Robert Goodman, a former Louisiana state prisoner who became a community activist after his release, was a “responder” for the NOPD’s CeaseFire program until August 2016. The city designed the program to discourage retaliation for violence, employing the “carrot and stick” approach. Yet Goodman witnessed increasing emphasis on the “stick” as time went on, while control of the resources for the “carrot” was wrested away from the community and given to city hall.

“It’s supposed to be ran by people like us instead of the city trying to dictate to us how this thing should look,” said Goodman. “As long as they’re not putting resources into the hoods, nothing will change. You’re just putting on Band-Aids.”

For its initial two years, the Palantir program met with success, and there was a noticeable drop in the number of murders and gun violence in New Orleans. But the effect soon wore off as crime rates climbed.

“When we ended up with nearly nine or ten indictments with close to 100 defendants for federal or state RICO violations of killing people in the community, I think we got a lot of people’s attention in that criminal environment,” said Serpas. “But over time, it must’ve wore off because before I left in August of ‘14, we could see that things were starting to slide.”

Professors Nick Corsaro and Robin Engel of the University of Cincinnati helped New Orleans build its gang database and worked on an evaluation of the city’s CeaseFire program—not knowing Palantir’s role in it. They found that the temporary reduction in homicides in the city did not coincide with the start of the program. Thus, the study could not confirm Palantir’s claim that its data-driven interventions caused the drop-off in violent crime.

When Corsaro was told about the Palantir program and that it was using information from the database he helped create, his reaction was visceral.

“Trying to predict who is going to do what based on last year’s data is just horseshit,” said Corsaro in a 2018 interview.

Such minor considerations as the evidence failing to support its claims did not keep Palantir from marketing its programs to other U.S. cities, as well as foreign police and intelligence services, with the claim that they had dramatically reduced violent crime in New Orleans.

True to its CIA-backed origins, Palantir is very secretive about its business dealings. However, it is known that the Danish national police and intelligence services signed contracts with Palantir reportedly valued at between $14.8 and $41.4 million. The services Palantir is providing include predictive technology designed to identify potential terrorists using data from CCTV cameras, automatic license plate readers, and police reports.

Denmark had to pass an exemption to the European Union’s data protection regulations to purchase the software. The 2016 contract was the first public mention of Palantir’s predictive technology.

In 2017, the Israeli newspaper Haaretz revealed that nation’s security services were using Palantir analytics to scrape information from social media and other sources to identify West Bank Palestinians who might launch “lone-wolf” attacks on Israelis. Palantir was one of only two companies from which the Israelis procured predictive software.

Notably, Palantir’s first publicly-reported use of social media for its social network analysis was in New Orleans.

“I’m not surprised to find out that people are being detained abroad using that information,” said former criminal court judge and New Orleans city council president Jason Williams, who noted the differences between the legal framework of the U.S. and Israel. “My concern is, the use of technology to get around the Constitution—that is not something that I would want to see in the United States.”

But how would you know whether Palantir technology is being used to circumvent the Constitution if you do not even know that it is being used? Because of Palantir’s philanthropic relationship with the city, the fact that NOPD was using its program was not even revealed to the city council for years.

“I don’t think there’s anyone in the council that would say they were aware that this had even occurred because this was not part and parcel of any of our budget allocations or our oversight,” Williams said in 2018.

Likewise, New Orleans defense attorneys were unaware of the Palantir program when questioned about it for a 2018 report in The Verge. They had received hundreds of thousands of pages of discovery documentation from the prosecution in criminal trials, none of which mentioned Palantir.

Although Palantir and its agents had publicly discussed its work in New Orleans prior to 2018, they never gave details or mentioned that the company was using information gleaned from social media to predict who would commit crimes. Instead, they used generalizations and said they were “developing a better understanding of violent crime propensity and designing interventions to protect the city’s most vulnerable populations.”

“It is especially disturbing that this level of intrusive research into the lives of ordinary residents is kept a virtual secret,” said Jim Craig, the director of the Louisiana office of the Roderick and Solange MacArthur Justice Center. “It’s almost as if New Orleans were contracting its own version of the NSA (National Security Agency) to conduct 24/7 surveillance on the lives of its people.”

Craig believes city officials kept the details of the Palantir program secret because revelation of it would have caused public outrage.

“Right now, people are outraged about traffic cameras and have no idea this data-mining program is going on,” he said in 2018. “The South is still a place where people very much value their privacy.”

In late 2013, Palantir approached the CPD in Chicago with an offer to sell it a Social Network Analysis (“SNA”) program based on the one it developed for New Orleans. The SNA program is an AI algorithm driven by machine learning that tries to draw connections among people, places, vehicles, weapons, addresses, social media posts, and other data held in previously separate databases. The user enters a query such as a partial license plate number, nickname, phone number, address, or social media handle and receives in return a potential identification with details about the person and any associates. The CPD negotiated a $3 million price tag but ultimately declined to purchase the Palantir program in favor of developing its own SSL.
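At a high level, that kind of lookup can be sketched as a query over merged records. Everything below—the record fields, names, and `lookup` function—is a hypothetical illustration, not Palantir’s actual implementation.

```python
# Hypothetical sketch of a cross-database lookup of the kind an SNA
# tool performs: a partial identifier resolves to a person plus the
# web of associates linked to that person. Invented data throughout.

records = [
    {"name": "J. Doe", "plate": "ABC1234", "phone": "555-0101",
     "associates": ["R. Roe"]},
    {"name": "R. Roe", "plate": "XYZ9876", "phone": "555-0102",
     "associates": ["J. Doe"]},
]

def lookup(partial_plate):
    """Match a partial license plate against merged records and return
    each hit along with the associates linked to it."""
    hits = [r for r in records if partial_plate in r["plate"]]
    return [(r["name"], r["associates"]) for r in hits]

matches = lookup("C12")  # a plate fragment resolves to a person and associates
```

Note that the query surfaces not only the matched person but also associates who may never have had any police contact—which is precisely the civil-liberties concern raised above.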

Things were different in New York, where public records show the NYPD paid Palantir $3.5 million per year from 2015 until cancelling its contract two years later. But, in keeping with its culture of secrecy, Palantir refused to reveal what the NYPD purchased. The city’s comptroller would not disclose the existence of the Palantir contract for “security reasons.” It only became publicly known after documents were leaked to BuzzFeed.

One of the other programs deployed by the NYPD—one that might look like Palantir’s handiwork but isn’t—is “Patternizr,” which allows an investigator to digitally examine police documentation on any crime and lets an algorithm search for other crimes likely committed by the same person or persons. Another program, begun in 2002 just after the 9/11 attacks, tracked every Muslim within a 100-mile radius of the city.

Palantir also has a secretive relationship with the LAPD. The relationship has escaped public scrutiny because it was made through the LA Police Foundation rather than the LAPD itself. It is not known how much Palantir was paid, but it is safe to bet that the amount was in the millions of dollars. Because of its corporate culture of secrecy and the tactic of using police foundations to finance its programs, there is no way to know how many law enforcement jurisdictions are contracting with Palantir.

Judicial and Post-Sentencing Use of Person-Based Predictive Policing

Person-based predictions of the likelihood an arrestee, criminal defendant, prisoner, or parolee will commit a crime in the future have been used for decades to aid in making decisions about granting bail, imposing sentences, granting or revoking parole, as well as civil commitment of sex offenders. A newer development is the use of high-speed processors and AI algorithms to evaluate information provided by the subject and gleaned from the public sphere and police and judicial databases.

Judges typically spend little time on bail and sentencing decisions. They like the black box that spits out risk assessments. And the appellate courts are backing them up. In its 2016 ruling in State v. Loomis, 881 N.W.2d 749 (Wis. 2016), the Wisconsin Supreme Court upheld sentencing partially based upon a computerized risk assessment using the Correctional Offender Management Profiling for Alternative Sanctions (“COMPAS”) program.

The Court rejected Loomis’ claims that COMPAS improperly took gender into account and that he had been denied his rights to an individualized sentence and to be sentenced based on accurate information.

But the Court also ruled that COMPAS risk scores may not be used “to determine whether an offender is [to be] incarcerated” or “to determine the severity of the sentence.” Then what is a sentencing judge supposed to use the risk score for? The court instructed that a pre-sentencing investigation (“PSI”) report that incorporates a COMPAS score must include five warnings:

• that the method of calculating risk scores is an undisclosed trade secret;

• that the scores cannot identify specific individuals who are at high risk as the program relies on group data;

• that the scores are based on national data and no Wisconsin validation study has been performed;

• that studies have “raised questions” about whether COMPAS disproportionately classifies minorities as being at higher risk for recidivism; and

• that COMPAS was specifically developed to assist the prison system in making post-sentencing determinations about assigning prisoners to rehabilitation programs.

Despite all of these issues, the Court upheld the use of COMPAS in sentencing Loomis.

New Jersey’s Bail Reform Law

On January 1, 2017, the New Jersey Criminal Justice Reform Act took effect, ushering in a new bail system based upon the premise that innocent people should not be in jail and everyone charged with a crime is innocent until proven guilty in a court of law. With exceptions for people who are unacceptable flight risks or a danger to their communities, people in jail awaiting trial were given a presumption of release.

To continue detention, a prosecutor is now required to establish that the person is a public danger or unacceptable flight risk in open court, during a hearing, with both sides represented by attorneys who have discovery rights and the right to call and cross-examine witnesses.

The new system relies on a public-safety assessment (“PSA”) predictive algorithm designed by the Laura and John Arnold Foundation, which is being used in at least 40 jurisdictions. The foundation used a database of more than 1.5 million cases from over 300 jurisdictions to identify the factors that predict whether an arrestee will be arrested for a new violent crime or fail to appear in court. After it was implemented, the pretrial detention rate fell by 35.7%, and the pretrial jail population declined by 20.3% in New Jersey.
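Risk tools like the PSA are typically simple points systems rather than black-box models. The sketch below is a toy version with invented factors and weights—the Arnold Foundation publishes the PSA’s actual factors and weights, which differ from these.

```python
# Toy sketch of a points-based pretrial risk score in the style of the
# PSA. Factors and weights here are invented for illustration; they are
# NOT the Arnold Foundation's published PSA weights.

def fta_score(age_under_23, pending_charge, prior_convictions, prior_ftas):
    """Return a failure-to-appear risk score; higher means riskier."""
    score = 0
    score += 1 if age_under_23 else 0      # youth at time of arrest
    score += 1 if pending_charge else 0    # charge pending at time of arrest
    score += min(prior_convictions, 2)     # prior convictions, capped at 2
    score += 2 * min(prior_ftas, 2)        # prior failures to appear, weighted
    return score

# A 21-year-old with three prior convictions and one prior failure to appear:
score = fta_score(True, False, 3, 1)
```

Because every input is drawn from arrest and court records, the concern raised elsewhere in this article applies here too: biased policing upstream inflates the counts the formula adds up.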

The new system includes the creation of a pretrial services agency that tracks those released from jail at a variety of monitoring levels. In 2017, 3,686 people were released from jail on “Pretrial Monitoring Level 3+,” which requires weekly monitoring in person or by phone with the possibility of home detention or electronic monitoring.

Although the New Jersey system has not been rigorously tested for flaws, it is indicative of how predictive algorithms can be used to help busy judges make decisions they usually spend very little time on. If accurate, the PSA could greatly increase judicial efficiency, reduce jail crowding, and save taxpayers the cost of needless incarceration. It would also make it easier for an innocent defendant to help defense counsel prepare for trial.

Using Risk Assessment in Sentencing

In September 2019, the Pennsylvania Sentencing Commission formally adopted a scaled-down version of its four-year-old draft risk assessment tool—an algorithm designed to predict the risk that a person will commit a crime or violate probation or parole in the future.

The idea is for low-risk defendants to receive shorter sentences or avoid prison altogether while the sentences of high-risk prisoners could be lengthened. To do this, the tool factors in age, gender, and conviction history. The 2015 draft version, developed after the state legislature passed its authorization in 2010, also included residential location, educational level, drug or alcohol use, and other data—all information a judge is now encouraged to seek out independently if the scaled-down tool proves unhelpful. But does it assess an individual or a computer-selected group?

“These instruments aren’t about getting judges to individually analyze life circumstances of a defendant and their particular risk,” said University of Michigan law professor Sonja Starr, a leading opponent of risk assessments. “It’s entirely based on statistical generalizations.”

Attorney Bradley Bridge of the Defender Association of Philadelphia noted that differences in policing around the state have a dramatic effect on arrests, which will have an equal effect on risk assessments, introducing racial bias against Blacks in heavily-policed Philadelphia neighborhoods.

“This is a compounding problem,” said Bridge. “Once they’ve been arrested once, they are more likely to be arrested a second or a third time—not because they’ve necessarily done anything more than anyone else has, but because they’ve been arrested once or twice before.”

Further, official records often contain mistakes. This is especially true when jail staff fill out risk assessment forms. This writer had such an experience when he was asked by jail staff how he was employed and replied, “I’m working on my Ph.D. in Chemistry”—after which staff wrote down “unemployed” on the PSI form.

Even when not making blatantly false entries, jail staff may be called on to provide subjective answers to some risk assessment questions that can also introduce error.

“I don’t think they’re all lying, but these guys have figured out the importance of these [assessments] and what can happen as a result,” said an Ohio risk assessment officer who was concerned that some of the defendants might be lying to improve their risk assessments. He also noted that subjective assessments of the subject’s attitude—whether he feels pride in his criminal behavior, is willing to walk away from a fight, or is following the Golden Rule—are the most difficult ones to make accurately.

The Adult Probation and Parole Department in Philadelphia was an early adopter of risk assessment, using a tool developed by University of Pennsylvania statistician Richard Berk, who has been at the national forefront of the push to digitize and modernize risk assessment tools. A controlled trial of Berk’s system in 2008 showed that parolees predicted to be low-risk who were assigned to less onerous supervision were not arrested at a higher rate than those on stricter supervision. Implementation since then has borne out those results. Berk’s success may have led to the push to use risk assessment in sentencing.

Pennsylvania was the first state to openly use risk assessment in sentencing, and it was soon joined by Arizona, Colorado, Delaware, Kentucky, Louisiana, Oklahoma, Virginia, Washington, and Wisconsin, all of which began supplying sentencing judges risk assessment scores.

Northpointe and COMPAS

Tim Brennan, then a University of Colorado statistics professor, co-founded Northpointe in 1989 with jail supervisor Dave Wells to improve upon the then-leading risk assessment tool, the Canadian-developed Level of Service Inventory. Brennan wanted to address the major theories about the causes of crime in his tool, which uses a set of scores derived from the answers to 137 questions given by a defendant or pulled from his or her criminal history. Some of the questions include whether any family members have been arrested or whether any schoolmates use drugs.

Brennan and Wells called the risk-assessment tool COMPAS (as explained above). They were proud of the fact that it assessed not only risk but also almost two dozen “criminological needs” relating to the major theories of criminality such as “criminal personality,” “social isolation,” substance abuse, and “residence/stability.”

Many states, including New York and Wisconsin, initially adopted COMPAS without rigorously testing its performance. In 2009, Brennan and two colleagues published a validation study claiming an accuracy rate of 68% in a sample of 2,328 New York probationers. Then in 2012, New York published its own statistical analysis of COMPAS covering almost 14,000 probationers and claimed the program had an accuracy rate of 71% at predicting recidivism, though it did not evaluate racial differences.

In 2011, Brennan and Wells sold Northpointe to Constellation Software, a Canadian conglomerate, which renamed the firm Equivant. The following year, the Wisconsin Department of Corrections (“WIDOC”) began using COMPAS. Soon afterwards, some Wisconsin counties started using it for pretrial release decisions. It then slipped into pre-sentencing reports, and judges started considering the COMPAS risk assessment in sentencing. That is what led to the Loomis case.

Unfortunately, the situation may be much worse than the Loomis Court indicated. ProPublica obtained the risk assessment of over 7,000 people arrested in Broward County, Florida, in 2013 and 2014, and then checked to see which of them had actually been charged with another crime compared to the COMPAS prediction. It found that COMPAS had an overall accuracy of 61%, but it was racially skewed, overstating recidivism risk for Blacks while understating it for whites. Just 23.5% of whites labeled high risk did not reoffend, compared to 44.9% of Blacks, while 47.7% of whites labeled low risk did reoffend, compared with 28.0% of Blacks.
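The racial skew ProPublica found boils down to two error rates computed per group from risk labels and observed outcomes. The sketch below shows the calculation on a tiny invented dataset—the eight rows are illustrative only, not ProPublica’s Broward County data.

```python
# Per-group error rates of the kind ProPublica computed for COMPAS.
# The tiny dataset below is invented for illustration.

def error_rates(rows):
    """rows: (group, labeled_high_risk, reoffended) tuples.
    Returns {group: (fpr, fnr)} where
      fpr = share of non-reoffenders labeled high risk (over-prediction),
      fnr = share of reoffenders labeled low risk (under-prediction)."""
    out = {}
    for g in {grp for grp, _, _ in rows}:
        sub = [(h, r) for grp, h, r in rows if grp == g]
        non_reoffenders = [h for h, r in sub if not r]
        reoffenders = [h for h, r in sub if r]
        fpr = sum(non_reoffenders) / len(non_reoffenders)
        fnr = sum(1 for h in reoffenders if not h) / len(reoffenders)
        out[g] = (fpr, fnr)
    return out

rows = [
    ("white", True, False), ("white", False, False),
    ("white", False, True), ("white", True, True),
    ("Black", True, False), ("Black", True, False),
    ("Black", False, False), ("Black", True, True),
]
rates = error_rates(rows)  # a higher fpr for one group signals the skew
```

In ProPublica’s real data, the false-positive rate was far higher for Black defendants and the false-negative rate far higher for whites, even though the tool’s overall accuracy was similar for both groups.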

To test whether prior criminal history explained the racial bias, ProPublica controlled for that information and re-ran the analysis. Even so, Black defendants were 77% more likely to be assessed as high risk of committing a future violent crime and 45% more likely to be assessed as high risk of committing a future crime of any kind.
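The disparity ProPublica measured is a difference in error *types*, not just overall accuracy: two groups can be scored with the same accuracy while one absorbs far more false "high risk" labels. The sketch below illustrates that mechanism with invented toy numbers, not the Broward County data or the actual COMPAS methodology.

```python
# Illustrative only: per-group error rates of the kind ProPublica computed.
# All figures below are invented toy data, not the Broward County dataset.

def error_rates(records):
    """records: list of (predicted_high_risk, reoffended) boolean pairs."""
    fp = sum(1 for pred, actual in records if pred and not actual)
    fn = sum(1 for pred, actual in records if not pred and actual)
    negatives = sum(1 for _, actual in records if not actual)  # did not reoffend
    positives = sum(1 for _, actual in records if actual)      # did reoffend
    # False positive rate: share of non-reoffenders labeled high risk.
    # False negative rate: share of reoffenders labeled low risk.
    return fp / negatives, fn / positives

# Two hypothetical groups with similar overall accuracy but opposite skews:
# Group A collects more false "high risk" labels, Group B more false "low risk."
group_a = ([(True, False)] * 45 + [(True, True)] * 55 +
           [(False, True)] * 28 + [(False, False)] * 72)
group_b = ([(True, False)] * 23 + [(True, True)] * 77 +
           [(False, True)] * 48 + [(False, False)] * 52)

for name, g in [("Group A", group_a), ("Group B", group_b)]:
    fpr, fnr = error_rates(g)
    print(f"{name}: false positive rate {fpr:.1%}, false negative rate {fnr:.1%}")
```

A tool can therefore be "equally accurate" for both groups while systematically over-flagging one of them, which is exactly the pattern the Broward County analysis reported.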

The problems with COMPAS are typical of risk-assessment tools used by judges. They lack sufficient validation. Further, even if we assume the accuracy levels claimed for COMPAS are correct, would you want a decision that could alter the rest of your life to rest on a computer analysis that was only 61%, or even 68%, accurate?

Would you own a car that got you to your destination only 68% of the time?

When almost one in three risk assessments is wrong, that translates to one in three people getting bail that is too high or too low, a sentence that is too long or too short, or being released or held when they should not be. The question is whether we are willing to accept that high an error rate in an attempt to improve the efficiency of judges.

The Static-99 Risk Assessment Tools Used in Civil Commitment of Sex Offenders

The Static-99 and the related Static-99R are risk assessment tools originally developed in Canada by McGill University researchers. Their intended use was to determine what rehabilitation programs might be most beneficial to convicted sex offenders. To this end, the researchers used sex offender populations in Canada and Denmark to train the risk assessment tool.

But as Rashida Richardson, director of policy research at the AI Now Institute, points out, Blacks are about 12% of the population of the U.S. but only around 3% of the population of Canada and even less of the Danish population. This immediately raises the question of racial bias in the Static-99s. Several other risk assessment tools being used in the U.S. were developed in Europe, which has both a different racial makeup and a different culture from the U.S.

In the case of the Static-99 and Static-99R, one need not speculate whether the results are skewed by the population used to train the AI algorithm. In State v. Jendusa, 955 N.W.2d 777 (Wis. 2021), the Wisconsin Supreme Court ruled that the raw data used to evaluate sex offenders for civil commitment in Wisconsin was discoverable in sex offender civil commitment proceedings [see CLN, Aug. 2021, p. 42]. In a hearing, WIDOC psychologist Christopher Tyre admitted that the preliminary base rate he calculated using the Wisconsin data was about a third of the base rate of the Canadian and Danish groups used to originally calculate defendant Jendusa’s risk level using the Static-99 and Static-99R.

The base rate was off by a factor of three simply because the groups used to train the tool were Canadian and Danish, not Wisconsinites. One must wonder how many sex offenders in Wisconsin and other states using the Static-99s—the most widely-used risk assessment tools for sex offenders—have been civilly committed, not because of their risk of reoffending, but because Canadian and Danish sex offenders’ data were used instead of local data.
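Why does a mismatched base rate matter so much? A probability calibrated to one population does not transfer to a population where the underlying offense rate is different. The sketch below shows a standard prior-shift adjustment that re-scales a risk estimate by the odds ratio of the two base rates; it is a hypothetical illustration of the statistical issue, not the actual Static-99 scoring method, and all of the numbers are invented.

```python
# A minimal sketch (not the Static-99 methodology) of why the training
# population's base rate matters. A prior-shift correction re-calibrates a
# predicted probability from one base rate to another by adjusting the odds.

def shift_base_rate(p, source_rate, target_rate):
    """Re-calibrate probability p from a population with base rate
    source_rate to one with base rate target_rate (prior-shift adjustment)."""
    odds = p / (1 - p)
    adjusted = odds * (target_rate / (1 - target_rate)) / (source_rate / (1 - source_rate))
    return adjusted / (1 + adjusted)

# Hypothetical: a tool calibrated to a 15% base rate, applied locally where
# the base rate is 5% -- one third, as the Wisconsin testimony described.
score = 0.40  # risk as reported by the tool
local = shift_base_rate(score, source_rate=0.15, target_rate=0.05)
print(f"reported risk {score:.0%} -> locally calibrated risk {local:.0%}")
```

Under these assumed numbers, a reported 40% risk corresponds to roughly 17% once the local base rate is accounted for — the kind of gap that could put a person on the wrong side of a civil commitment threshold.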

Problems with Person-Based Predictive Policing

Like place-based predictive policing, person-based algorithms are criticized for a lack of transparency, various deficiencies in input data resulting in GIGO—especially racial bias in the historic criminal justice records being used to self-program an already racially-biased program—as well as the potential for erosion of Fourth Amendment rights.

All of the previously discussed deficiencies in place-based predictive policing apply to person-based predictive policing, and some are even worse. These programs also have issues unique to person-based predictions, such as training the AI on a non-local population, which introduces biases that may be cultural rather than racial.

Computerized risk assessment scores have been praised for reducing the jail and prison populations, but is that truly the measure of their success? What makes them better than randomly releasing more people if the measure is the reduction in prisoner population? The question should be whether the right people are being released, and as the ProPublica study of COMPAS showed, that may not be happening.

Lack of Transparency

Although most of the private firms providing predictive policing algorithms are secretive about their work, Palantir has taken secrecy to new heights. We know what the Saint Louis County Police Department paid HunchLab to do, even if we do not have the details on how the program works. But we do not know what services the NYPD and the LAPD (the latter through the cover of a foundation) paid Palantir unknown millions of dollars to provide.

In multiple settled lawsuits, the NYPD admitted having a program to track every Muslim within a 100-mile radius of the city. Was Palantir involved in such unconstitutional activities? Just as the lack of transparency makes it impossible to check the algorithms for racial bias, it renders us incapable of knowing what is actually being done with the technology our governments purchase. When the item is not budgeted but provided pro bono, as in New Orleans, or through a private police foundation, as in Los Angeles, there are no checks and balances, and meaningful oversight is virtually impossible. Legislative officials may not even know the predictive policing and surveillance program exists, much less what it is being used for.

Function Creep

The use of a program for a purpose it was not designed for is called “function creep.” Ironically, most predictive policing programs were created by function creep when predictive algorithms for seismology, meteorology, or epidemiology were modified for use in predicting crime. The concern is that such modified algorithms may contain biases based on their original purpose.

In the case of epidemiology, this was an intentional use of an unproven academic theory that violent crime spreads like a contagious disease.

Likewise, the COMPAS tool started as a way for prison staff to determine which programs to put a prisoner into, then started being used to influence bail, probation, sentencing, and parole decisions.

Similarly, the Static-99 and Static-99R tools were designed to determine which rehabilitation programs were best for a convicted sex offender, but they then morphed into being used in parole decisions and sex offender civil commitment.

Function creep should frighten us, given the myriad purposes to which these powerful AI programs could be put. It would be child’s play to track people for non-criminal characteristics or activities such as practicing a religion, involvement in community or labor organizing, membership in a racial or ethnic group, or even passing through a “high crime” area, taking part in political protests, patronizing certain businesses, and, yes, voting.

There have already been numerous cases of law enforcement officials misusing police databases to perform background checks on social and business partners. Imagine the power and potential for abuse these surveillance systems put into the hands of a police officer who suspects a spouse is unfaithful.

Racially-Biased Input

Even if race itself is not entered as a datum in a person-based predictive algorithm, racially-biased information can be encoded into the type of laws that were enforced or other information that covertly carries racial information such as name, address, educational and employment history, credit history, musical preferences, and internet purchases.

As previously mentioned, Blacks are prosecuted at a much higher rate than whites for marijuana possession throughout the country despite using marijuana at about the same rate. The paucity of “white collar” crime data in police databases and the historic over-policing of poor and minority neighborhoods also influences algorithms in predictive policing. Errors in police data entries and intentional misinformation will skew the self-programming algorithm as it seeks out patterns in the data. All of this combines to render the results unreliable and biased.

Garbage in Still Means Garbage Out

Person-based predictive policing is especially vulnerable to clerical errors in databases. Because such errors are often not readily apparent, the subject of the risk assessment will never know that his or her “high risk” score was based on erroneous input, not computer-sifted facts. Because the prediction is personal, it has a much greater effect on an individual than place-based predictions. Likewise, racial and other biases in the historic data have a more pronounced effect on an individual when the prediction is person-based.

Fourth Amendment Considerations

Whereas place-based predictive policing might result in “sweeps” of “high crime areas” —perhaps even with attendant “stop-and-frisk” procedures—person-based predictive policing could result in a wholesale abrogation of civil rights and possibly even preventative detention as the predictions become more widely accepted in the judiciary.

Person-based predictive policing algorithms have also been criticized as a violation of the right to privacy. The algorithms scour social media and use location data from cellphones, public cameras, and automatic license plate readers to track people and map their social interactions regardless of whether they are considered criminal suspects or not. However, it is unknown whether the Supreme Court would agree that this is unconstitutional.

In Carpenter v. United States, 138 S. Ct. 2206 (2018), the Court examined the Stored Communications Act, 18 U.S.C. § 2703(d), which allows the government to obtain cell phone location records from third parties (e.g., a cellphone service provider) without a warrant upon a showing of “specific and articulable facts” that the records are “relevant and material to an ongoing criminal investigation.”

The Carpenter Court ruled that accessing seven days or more of these records is a search protected by the Fourth Amendment, so the government must have a warrant to require the third party to provide the information. But it was a 5-4 decision, and the composition of the Supreme Court has changed since then. The current Court might allow unlimited warrantless collection of cellphone location data. In fact, two of the dissenting justices would have eliminated the protections the Fourth Amendment gives to “reasonable expectations of privacy” altogether. Also, the third party can voluntarily provide the information the government requests, with or without the customer’s permission.

One issue the courts have poorly addressed in the digital age is what constitutes a search. In Kyllo v. United States, 533 U.S. 27 (2001), the Court ruled that using a thermal imager to peer into the home of a person suspected of growing marijuana was a search requiring a warrant. But most courts have ruled that gathering digital information is not a search. If it is not a search, it is not covered by the Fourth Amendment and does not require a warrant.

Achieving Oversight of Predictive Policing

The judiciary has proven itself unable to keep pace with the rapid development of technology in the criminal justice system. Supreme Court Justice Samuel Alito admitted this in his dissenting opinion in the Carpenter case, stating that “legislation is much preferable … for many reasons, including the enormous complexity of the subject, the need to respond to rapidly changing technology, and the Fourth Amendment’s scope.”


Legislation may be preferable, but legislatures have not been engaged in this issue for the most part. In June 2020, New York City passed the Public Oversight of Surveillance Technology (“POST”) Act, requiring the NYPD to list all of its surveillance technologies and describe how they affect the city’s residents. That is a good start! But the POST Act would never have been passed had it not been for the tireless work of community activists, and the NYPD strenuously opposed its passage.

“We experienced significant backlash from the NYPD, including a nasty PR campaign suggesting that the bill was giving the map of the city to terrorists,” said the AI Now Institute’s Richardson. “There was no support from the mayor and a hostile city council.”

This emphasizes the importance of community activism in gaining oversight over predictive policing. In the case of the Stop LAPD Spying Coalition, community activism may well have helped lead to the abandonment of PredPol.

Freedom of Information Requests and Lawsuits

Freedom of Information Act (“FOIA”) and Freedom of Information Law (“FOIL”) requests and lawsuits by civil rights activist organizations have met with limited success in piercing the veil of secrecy surrounding predictive policing. The most notable success was a 2017 FOIL case in which the Brennan Center for Justice was able to get a small amount of information about the NYPD’s predictive policing and surveillance programs released to the public. Its greater success may have been in alerting the city’s residents to what the NYPD was doing, possibly leading to the passage of the POST Act.

Of course, defense attorneys could try using discovery to find out how their clients are being assessed by predictive policing algorithms. Hampering that approach is the fact that most defense attorneys are not aware of their clients being assessed by computer programs in the first place.

“The legal profession has been behind the ball on these risk assessment tools,” said Melissa Hamilton, who studies legal issues related to risk assessment tools at the U.K.’s University of Surrey and gives risk-assessment training courses to lawyers who often do not know their clients are being subjected to computerized risk assessment. “If you’re not aware of it, you’re not going to be challenging it.”

Even when challenged or subjected to discovery, judges tend to go along with the claims that the algorithms are trade secrets and that it would be a security threat to release information on them or the input data they use.

University of California, Berkeley, public policy researcher Jennifer Skeem and Christopher Lowenkamp, a social sciences analyst for the Administrative Office of the U.S. Courts in Washington, D.C., published a study in which they examined three different options for reducing racial bias in predictive policing algorithms using a 68,000-person database. The option that achieved the most balanced result was one in which race was explicitly taken into account and Blacks were assigned a higher threshold than whites for being deemed high-risk.
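Mechanically, that option amounts to applying a different "high risk" cutoff to each group's scores. The sketch below shows only that mechanism; the groups, scores, and thresholds are invented, and this is not the Skeem/Lowenkamp implementation.

```python
# Illustrative only: group-specific thresholds of the kind the study
# evaluated. All names and numbers here are hypothetical.

def label_high_risk(score, group, thresholds):
    """Apply a per-group cutoff to a continuous risk score in [0, 1]."""
    return score >= thresholds[group]

# One shared cutoff for everyone...
shared = {"A": 0.50, "B": 0.50}
# ...versus a raised cutoff for the group the historic data over-flags,
# trading identical treatment of scores for more balanced error rates.
balanced = {"A": 0.50, "B": 0.65}

print(label_high_risk(0.6, "B", shared))    # True under the shared cutoff
print(label_high_risk(0.6, "B", balanced))  # False under the raised cutoff
```

The design tension is visible even in this toy form: the same score yields different labels depending on group membership, which is precisely what equal protection law and many people's intuitions about fairness resist.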

Of course, the idea of programming in leniency for a specific racial group or groups would go against many people’s sense of fairness even if it were intended to address historic injustice. Further, it is prohibited by a variety of laws intended to ensure equal protection.

One legal researcher, Kelly K. Koss, has suggested using the FBI to promulgate standards for predictive policing algorithms and how law enforcement uses them.

“Using the FBI as the gatekeeper for this testing could be particularly advantageous because it would enable private companies developing proprietary algorithms to maintain their trade secrets, while simultaneously ensuring that the quantitative data output from these algorithms is reliable,” Koss wrote.

One problem with that idea is that the FBI has proven itself spectacularly incompetent in the regulation of crime labs—even its own crime lab. A more appropriate agency might be the National Institute of Standards and Technology, a federal body that promulgates technological standards. It would also be a good idea to have a regulatory body outside of law enforcement to police the standards. Law enforcement has shown itself to be too enamored of the promised crime-solving and crime-prevention aspects of predictive policing and too unconcerned about privacy and constitutional rights issues.


In their current form, predictive policing algorithms are seriously flawed, making decisions tainted by historic racial bias and false information in their input data. But they are also here to stay: over a third of U.S. cities have implemented a predictive policing program, and even cities that have “abandoned” them have gone on to implement other computerized surveillance programs, often in secret. Soon, we can expect such digitized surveillance to expand to include drones with cameras and microphones observing us 24/7, with police having access to data from commonly used devices connected to the “Internet of Things” (IoT) such as doorbell cameras, thermostats, and vehicles, and perhaps even voice-activated assistants such as Alexa and Siri. It is therefore important for people to unite and pressure legislative and administrative governmental bodies to place limits on surveillance and predictive policing before the problem irretrievably spirals out of control.

“Eliminating bias is not a technical solution,” said Yeshimabeit Milner, director and co-founder of Data for Black Lives, an organization of computer scientists and activists dedicated to fighting racial bias in technology. “It takes deeper and, honestly, less sexy and more costly policy change.”  


Sources: Science Magazine; Significance (official magazine and website of the Royal Statistical Society (RSS) and the American Statistical Association (ASA)); “Big Data and the Fourth Amendment: Reducing Overreliance on the Objectivity of Predictive Policing” (Laura Myers, Allen Parrish, and Alexis Williams); “Does ‘Precrime’ Mesh With the Ideals of U.S. Justice?: Implications for the Future of Predictive Policing” (Jackson Polansky and Henry F. Fradella); “A Bias-Free Predictive Policing Tool?: An Evaluation of the NYPD’s Patternizr” (Molly Griffard); “Predictive Policing and Reasonable Suspicion” (Andrew Guthrie Ferguson); “Nuance, Technology, and the Fourth Amendment: A Response to Predictive Policing and Reasonable Suspicion” (Fabio Arcila, Jr.); “Sloshing Through the Factbound Morass of Reasonableness: Predictive Algorithms, Racialized Policing, and Fourth Amendment Use of Force” (Namrata Kakade); “Leveraging Predictive Policing Algorithms to Restore Fourth Amendment Protections in High-Crime Areas in a Post-Wardlow World” (Kelly K. Koss); “Challenging Racist Predictive Policing Algorithms Under the Equal Protection Clause” (Renata M. O’Donnell); “Dirty Data, Bad Predictions: How Civil Rights Violations Impact Police Data, Predictive Policing Systems, and Justice” (Rashida Richardson, Jason M. Schultz, and Kate Crawford); “A Byte Out of Crime” (Leslie A. Gordon); “Big Data Surveillance: The Case of Policing” (Sarah Brayne); “Technologies of Crime Prediction: The Reception of Algorithms in Policing and Criminal Courts” (Sarah Brayne and Angèle Christin)

