
The Clash Between Closed-Source Forensic Tools and the Confrontation Clause

by Anthony W. Accurso

Technology companies and prosecutors are working together to assert the companies’ right to protect their intellectual property in ways that deny criminal defendants their right to challenge the reliability of forensic evidence in criminal proceedings. These assertions of intellectual property rights make it difficult, if not entirely impossible, for defense teams to challenge the technologies used to gather or analyze the evidence used against defendants during criminal prosecutions.

Worse yet, new technologies are being used as part of every step of criminal proceedings, including stages where courts are less willing to recognize a defendant’s right to challenge them. In other instances, defendants are kept from even knowing a new technology was used as part of the investigation. Or, when defendants are aware and make reasonable efforts to challenge these technologies, companies refuse to produce materials included in court orders, often taking advantage of legal loopholes to avoid consequences for noncompliance.

Without better protections for defendants, some judges will weigh these competing rights in a way that denies justice to defendants.

History and Precedent

The Sixth Amendment to the U.S. Constitution contains a clause, known as the Confrontation Clause, which guarantees a defendant the right “to be confronted with the witnesses against him.”

In its first case involving the Confrontation Clause, the U.S. Supreme Court noted: “The primary object of the constitutional provision in question was to prevent depositions or ex parte affidavits, such as were sometimes admitted in civil cases, being used against the prisoner in lieu of a personal examination and cross-examination of the witness, in which the accused has an opportunity, not only of testing the recollection and sifting the conscience of the witness, but of compelling him to stand face to face with the jury in order that they may look at him, and judge by his demeanor upon the stand and the manner in which he gives his testimony whether he is worthy of belief.” Mattox v. United States, 156 U.S. 237 (1895).

The Court explicitly extended the meaning of “witness” to include any statement of a “testimonial” nature, which includes lab reports and other analyses produced from evidence. In a case where the Commonwealth of Massachusetts sought to admit a lab report, but refused to produce the technician who created it, the Court said: “The text of the [Sixth] Amendment contemplates two classes of witnesses—those against the defendant and those in his favor. The prosecution must produce the former; the defendant may call the latter. Contrary to [the Government’s] assertion, there is not a third category of witnesses, helpful to the prosecution, but somehow immune from confrontation.” Melendez-Diaz v. Massachusetts, 557 U.S. 305 (2009).

While this is an important constitutional right, like many other rights, it is not absolute. For instance, the Supreme Court has allowed for a balancing of rights when it comes to special witnesses, such as sexually abused children. In Maryland v. Craig, 497 U.S. 836 (1990), the Court explained, “[t]he central concern of the Confrontation Clause is to ensure the reliability of the evidence against a criminal defendant by subjecting it to rigorous testing in the context of an adversary proceeding, before the trier of fact. The word ‘confront,’ after all, also means a clashing of forces or ideas, thus carrying with it the notion of adversariness.” Id.

When a new device or technology is used in the investigation of a defendant, and the defendant seeks to obtain information about its inner workings and usage, the qualified right of a defendant to “confront” this testimonial evidence must often be balanced against the right of the owner to protect their intellectual property. This right is established in Article I, Section 8, Clause 8 of the U.S. Constitution—otherwise known as the Intellectual Property Clause.

What has not been decided by the Supreme Court is how courts must balance these competing rights. Many courts have handled this issue like courts do when dealing with similar interests in civil proceedings, viz., requiring the government to produce the relevant materials under a protective order, which prevents the defense or its analysts from widely distributing the produced materials.

Ideally, protective orders would be used frequently, allowing companies to protect their intellectual property while still affording defendants access to whatever they need to challenge the evidence against them. The American Bar Association Criminal Justice Standards Committee endorsed the use of protective orders in criminal proceedings in 2007, saying that the trade secret privilege should be allowed so long as it doesn’t conceal fraud, interfere with a party’s ability to challenge admissibility or reliability, conflict with other evidence standards, “or otherwise work an injustice.”

In practice, however, companies will refuse to release materials, even under a protective order. Or courts sometimes raise the standard defendants must meet to obtain relevant materials, forcing them to prove why these materials are necessary—a task that often proves impossible without the materials themselves. And these scenarios assume that the use of such technologies has been disclosed to the defendant in the first place, which is not always the case.

Breathalyzers

The breathalyzer has become commonplace in America, with media depictions of police using portable devices during traffic stops to determine whether a driver is drunk. According to ZDNet.com, this ubiquity means that “[d]runk driving has its own economy: A multi-billion dollar business for lawyers, state governments, and the breathalyzer manufacturers—all of which have a commercial stake at play.”

Breathalyzers, like the Alcotest 9510 produced by the German company Dräger, use two sensors to determine the amount of alcohol present in a breath sample: (1) an infrared beam that measures how much light passes through the breath and (2) a fuel cell that measures the electrical current generated as alcohol in the sample is oxidized. These readings can vary widely based on the temperature of a person’s breath and the amount of degradation the fuel cell has undergone over time.

Like other new technologies, these tools rely on software in the device to make decisions about how the readings from the sensors are handled, what variables to compensate for, and what reading is ultimately output to law enforcement. If the software used to program the device doesn’t properly adjust for the various factors—or makes other errors—this can lead to an inflated blood-alcohol content (“BAC”) reading.
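
To make this concrete, consider a deliberately simplified sketch in Python. This is not Dräger’s actual code; the function names and correction factor are invented for illustration. It shows how a single software compensation step shapes the reported BAC, and how omitting or miscoding that step can push a borderline reading over the legal limit.

```python
# Hypothetical sketch, not Dräger's actual code: all names and correction
# factors are invented. It shows how a software compensation step shapes
# the final BAC figure and how omitting it can inflate the result.

REFERENCE_TEMP_C = 34.0      # breath temperature the device is calibrated for
CORRECTION_PER_DEG = 0.023   # invented factor: signal rises ~2.3% per deg C

def report_bac(ir_signal: float, fuel_cell_signal: float,
               breath_temp_c: float, compensate: bool = True) -> float:
    """Fuse the two sensor readings into a single reported BAC."""
    raw_bac = (ir_signal + fuel_cell_signal) / 2.0  # simplistic sensor fusion
    if compensate:
        # Warmer breath carries more alcohol vapor at the same true BAC,
        # so the raw signal must be scaled back toward the reference temp.
        raw_bac /= 1.0 + CORRECTION_PER_DEG * (breath_temp_c - REFERENCE_TEMP_C)
    return round(raw_bac, 3)

# A feverish subject (39 C breath) whose true BAC is below the 0.08 limit:
print(report_bac(0.080, 0.080, 39.0, compensate=True))   # 0.072, under the limit
print(report_bac(0.080, 0.080, 39.0, compensate=False))  # 0.08, at/over the limit
```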

In 2015, one defendant in the state of Washington successfully petitioned a court to require the government to produce the source code for the software that runs the Alcotest 9510. After Dräger asserted its trade secret privilege, the court issued a protective order allowing Falcon Momot and Robert Walker—software engineers employed by the defense—to review the code on behalf of the defense.

The pair produced a preliminary report that outlined errors in the code that failed to account for relevant variables, meaning that “under some conditions the breathalyzer can return an inflated reading—a result that could also push a person over the legal limit.”

The report also stated that “the unit could record a result even when outside of its operational requirements.” If the breathalyzer was too warm, for example, the printed results would give no indication that the test might be invalid.
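
This second flaw, recording results outside the device’s operational envelope, amounts to a missing guard clause. The sketch below is likewise hypothetical; the temperature limit and function names are invented.

```python
# Hypothetical sketch of the reported flaw: a unit operating outside its
# temperature spec still prints a normal-looking result. The limit value
# and function names are invented.

MAX_OPERATING_TEMP_C = 40.0  # invented operational ceiling for the unit

def print_ticket_flawed(bac: float, unit_temp_c: float) -> str:
    # No guard: an out-of-spec test is indistinguishable from a valid one.
    return f"BAC: {bac:.3f}"

def print_ticket_guarded(bac: float, unit_temp_c: float) -> str:
    # What the researchers' report implies should happen instead.
    if unit_temp_c > MAX_OPERATING_TEMP_C:
        return "TEST INVALID: unit outside operating temperature range"
    return f"BAC: {bac:.3f}"

print(print_ticket_flawed(0.091, unit_temp_c=47.0))   # "BAC: 0.091"
print(print_ticket_guarded(0.091, unit_temp_c=47.0))  # flagged invalid
```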

The researchers, shocked by the errors they found and the implications for defendants in the multiple states that use the Alcotest 9510, presented their report (not the source code itself) at a conference for defense attorneys.

Dräger found out about this and claimed the researchers violated the protective order. “Pursuant to a protective order, Dräger provided the source code to both of the defense experts in Snohomish County,” said Marion Varec, a spokesperson for the company. “That source code is highly proprietary, and it was important to Dräger that the protective order limit its use to the purposes of the litigation at issue.”

Under threat of litigation, the researchers withdrew, leaving their research unfinished and the final report incomplete. “[Dräger] is trying to interpret the protective order to be something it’s not,” said Jason Lantz, the head defense attorney who hired the researchers. “I believe that interest of Draeger’s [an alternate spelling of Dräger] to protect their bottom line overlaps with the state’s interest to keep juries from hearing this information about the problems,” stated Lantz.

In a 2018 case from Hennepin County, Minnesota, similar issues were found regarding a different breathalyzer product—the Datamaster DMT, made by Intoximeters of St. Louis, Missouri. A defendant obtained a print-out of the 158 samples made by the sensors in the Datamaster device used to measure her BAC. Though the machine had ultimately reported her BAC as 0.160, the print-out showed the sensor reporting a BAC of 0.159878767. This finding was important because, under Minnesota law, the difference between a 90-day suspension of her license and a one-year suspension depended on whether her BAC was 0.16 or greater.
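
The arithmetic at issue is simple enough to check directly. Using the figures from the case, it is the rounding of the raw sensor value for display, not the measurement itself, that crosses the statutory line:

```python
raw_bac = 0.159878767    # the sensor value shown on the Datamaster print-out

print(f"{raw_bac:.3f}")  # "0.160": rounding alone crosses the 0.16 line
print(raw_bac >= 0.16)   # False: the measured value is below the threshold
```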

In another case, a defendant tried to obtain the source code for the Datamaster device, but the Minnesota Department of Public Safety refused to produce the code on the grounds that the agency did not “own, possess, or control” it, making disclosure impossible. One Minnesota district court judge, JaPaul Harris, ruled in September 2019 that the state could not be forced to disclose something it did not possess, i.e., the Datamaster source code.

Despite being issued subpoenas under protective orders, Intoximeters has exploited a loophole that allows it to avoid releasing the source code of its devices. The company is headquartered in Missouri, and because Missouri has not signed an interstate evidence-sharing agreement with Minnesota, judges in the latter state have no jurisdiction to enforce compliance with their subpoenas.

This can leave prosecutors without a case: when the government is unable to comply with a court’s order to produce material relating to the BAC analysis, it cannot rely on the BAC results at trial.

Because of revealed flaws in these devices, judges in Massachusetts and New Jersey invalidated 30,000 breath tests in 2019 alone. A 2021 case in Massachusetts called into question every breathalyzer test performed in the state between 2011 and 2018. Some of these errors stemmed from the software calibration of the devices, while others resulted from design failures. None of these flaws would have been discovered had judges not strictly enforced defendants’ right to confrontation.

However, the National Safety Council Committee on Alcohol and Other Drugs issued a statement in February 2009 asserting that analysis of a breathalyzer’s source code is not “pertinent, required, or useful for examination or evaluation of the analyzer’s accuracy, scientific reliability, [or] forensic validity.” But Vera Eidelman, staff attorney with the ACLU’s Speech, Privacy, and Technology Project, pushed back on that assertion, stating “The evidence is presented as being from an infallible, truth-telling computer. But algorithms are human constructs that are subject to human bias and mistakes, which get in the way of their design and use.”

Intoximeters itself has argued against producing the source code, even under protective order. “Source code is a red herring,” claimed Intoximeters counsel Wilbur Tomlinson after the company refused to comply with an order to produce the code. “Testing the instrument itself is the recognized method for determining whether the instrument has an issue. Any problem with the source code that could produce an erroneous result should be detectable by testing the instrument.”

When presented with reasonable-sounding statements like these, some judges can fail to grasp the nuances of how software can alter outcomes under unique circumstances. Without a good grasp of how technologies and source code interact, judges fail to see the harm in allowing prosecutions to proceed after a company fails to produce the code.

According to the Electronic Frontier Foundation (“EFF”), an organization engaged in legal advocacy over issues in emerging technology, “[a] misplaced less-than (<) symbol in Ireland’s National Integrated Medical Imaging system may potentially have led to thousands of incorrectly recorded MRIs, X-rays, and CT scans that, in turn, may have led to unnecessary medical procedures.”
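
A bug of this class can be a single flipped character. The sketch below is a hypothetical reconstruction, with invented field names and ranges, of how one misplaced comparison operator inverts which records get flagged:

```python
# Hypothetical reconstruction of the class of bug the EFF describes; the
# names and range logic are invented. One flipped operator inverts which
# scans get flagged for human review.

def needs_review_buggy(value: float, low: float, high: float) -> bool:
    return low < value < high          # bug: flags the NORMAL scans instead

def needs_review_fixed(value: float, low: float, high: float) -> bool:
    return not (low <= value <= high)  # intended: flag out-of-range scans

print(needs_review_buggy(9.7, low=1.0, high=5.0))  # False: abnormal, unflagged
print(needs_review_fixed(9.7, low=1.0, high=5.0))  # True: correctly flagged
```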

The EFF has also argued for the importance of disclosing software source code on the grounds that manufacturers have a financial incentive to hide errors rather than fix them, as was discovered when Volkswagen diesel cars were found to be programmed to intentionally give false pollution readings under the specific circumstances of an emissions test—a blatant attempt to skirt emissions standards.

“The hallmark of good science is transparency, which is not what Intoximeters is doing,” said Charles Ramsay, an attorney representing DUI defendants in Minnesota. “They safeguard their software more tightly than Microsoft. It’s not something they need to do to protect their business. They do it to prevent us from discovering the fatal flaws in the software.”

Judge Andrew Pearson, a Minnesota district court judge, said in a December 2019 order regarding release of breathalyzer source code, “Given that there is no way to retest this sample on a different machine, source code review of the only analysis that will ever be performed on this now-absent breath sample is that much more crucial and proportional.”

Ideally, more judges will be educated about the growing importance of source code access for forensic tools like breathalyzers and about how, without access to the code, defendants can be deprived of the means necessary to challenge the reliability and accuracy of these tools.

“This isn’t only happening with breathalyzers,” said the ACLU’s Eidelman. “There is a problem with junk science in forensics. We’ve seen numerous instances of technologies used in court getting debunked. Public access to information about that technology and the underlying principles has been central in making those challenges successful.”

Probabilistic Genotyping

Using biometric data in forensics has come a long way since it was discovered that fingerprints could be used to link a suspect to a crime scene. Measuring various unique markers about a person’s biology has taken some surprising turns; researchers are even developing a way to identify recent occupants of a room based on the microscopic life we carry with us (and leave behind). However, no other biometric science has come close to garnering the near-absolute faith we put in DNA genotyping.

Thanks to television shows like Law & Order and CSI, most Americans believe that DNA analysis is an exact and infallible method for establishing a defendant’s presence at the scene of a crime—or even establishing guilt. And while genotyping a blood or semen sample from a single donor can be a reliable method for linking a suspect to a crime, even a small error in collecting or processing the sample can undermine the reliability of the results.

Genotyping is a science built on probabilities. Sequencing the entire genome (all of a person’s DNA) of every suspect would be prohibitively expensive and time consuming. As a shortcut, the FBI has identified 13 core locations, or “loci,” on a person’s genome, known as short tandem repeat (“STR”) markers, that, when sampled, are likely to enable unique identification. However, different ethnic groups can require different numbers of sampled loci to establish uniqueness. Interpol uses ten loci when sampling persons from Great Britain and greater Europe, but only nine loci are generally used when processing samples from persons native to the Indian subcontinent.

According to Nature.com, “[a]ssuming that all 13 STR [loci] follow the principle of independent assortment (and they should, as they are scattered widely across the genome) and the population randomly mates, a statistical calculation based upon the FBI-determined STR allele [gene variant] frequencies reveals that the probability of two unrelated Caucasians having identical STR profiles, or so-called ‘DNA fingerprints,’ is approximately 1 in 575 trillion.”

Contained in that statement are a lot of assumptions, and real life has a way of shattering assumptions. For instance, the phrase “the population randomly mates” means that the probability of identifying a unique profile based on the same 13 locations can drastically change when dealing with an insular cultural or religious community like the Amish or Hasidic Jews.

Further, if a sample obtained from “crime scene evidence is in a very small quantity, poorly preserved, or highly degraded” such that only four genetic locations can be sampled, the probability of uniqueness drops to “roughly 1 in 331,” according to Nature.com.
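
The math behind both quoted figures is multiplication across loci, which is exactly why losing usable loci collapses the odds. The sketch below uses an invented, uniform per-locus match probability rather than actual FBI allele frequencies:

```python
# Sketch of the arithmetic behind random-match probabilities. The per-locus
# probability is an invented round number, not an FBI allele frequency.
# Independent assortment is what justifies multiplying across loci.

from math import prod

per_locus = [0.1] * 13                 # pretend each locus matches 1 in 10 people

print(round(1 / prod(per_locus)))      # 13 usable loci: 1 in 10 trillion
print(round(1 / prod(per_locus[:4])))  # only 4 usable loci: 1 in 10,000
```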

Understanding how various factors can alter the probability of uniquely matching a person to a DNA sample can go a long way to undermining the mystique of the infallibility of DNA evidence in the courtroom. And things get infinitely more complicated (and less reliable) when the imperfect sample is from more than one person.

This problem is more common than police procedural television shows portray. Humans touch a lot of things in our environment (including other people), which can then come in contact with other things, etc., and we shed skin cells when we do so. Every time contact happens, some skin cells can be transferred from object to object. [See Sept. 2018 CLN cover story on secondary DNA transfer.]

When law enforcement obtains a sample of a few skin cells from a doorknob at a crime scene, those cells are likely to have come from more than one person, deposited over an undetermined period of time. Determining the likelihood that a random person contributed to that mixed sample is known as probabilistic genotyping.

Processing any DNA sample involves using chemicals to “cut” the DNA into its constituent bits (alleles) and then using an electrical charge to draw the alleles through a gel—a process known as electrophoresis. The distance a fragment travels indicates which allele is present, and the darkness of the resulting band indicates how much of it is present. This process can be somewhat subjective (i.e., “how dark is that band?”). It gets significantly more subjective when multiple donors contributed to the sample, and the results can be influenced by the analyst’s assumptions.

One analogy: imagine taking the output from a digital weight scale that had more than one person standing on it and then trying to tell if a specific person was standing on it at the time (assuming a person’s weight does not fluctuate). It would help to know ahead of time how many people were on the scale, or whether any of them belonged to populations with unique characteristics (such as children or sumo wrestlers).
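
Carried into code, the analogy looks like this (all weights invented). The same combined reading supports different conclusions depending on the assumptions supplied up front:

```python
# Toy version of the scale analogy; every number is invented. The "evidence"
# is one combined reading, and whether a specific person fits depends
# entirely on assumptions that the evidence itself cannot supply.

combined_reading = 391.0   # total weight registered by the scale
suspect_weight = 175.0

# Assumption A: exactly two people were on the scale.
print(combined_reading - suspect_weight)        # 216.0 -> one plausible adult

# Assumption B: exactly three people were on the scale.
print((combined_reading - suspect_weight) / 2)  # 108.0 each -> adults? children?

# Change the assumed number of contributors and the same reading tells a
# different story, which is the trap DNA mixture analysts face.
```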

Dr. Bruce Budowle, an architect of the FBI’s national DNA database, has said, “Five-person mixtures can look like three-person; four contributors can look like two-person mixtures. It’s almost impossible to actually be accurate.”

New Genotyping Technology

As computer hardware and software became cheaper and more ubiquitous in the early 2000s, a whole industry developed around using computers to try to remove some of the subjectivity of probabilistic genotyping. But not all of these new forensic tools are well made, and like breathalyzer technology, the software used to run them can make decisions or assumptions that undermine the reliability of the results. Public crime labs in New York, Texas, and Washington have had to temporarily shut down testing or switch tools because of flaws discovered in DNA testing tools being used.

Lab analysts “make it seem like it’s a completely objective process,” said Bicka Barlow, a California attorney with a master’s degree in genetics and molecular biology. “But I’m 100 percent convinced that there are many people who are incarcerated who were convicted with DNA evidence who are innocent.”

“There are probably 5,000 or 6,000 innocent people in Texas prisons alone,” lamented lawyer Mike Ware, executive director of the Innocence Project of Texas. “How many of them could benefit from such a reanalysis of DNA that was used to convict them?”

Lydell Grant was arrested for the December 2010 stabbing death of Aaron Scheerhoorn outside a bar in Houston. Several witnesses identified Grant in a photo line-up (a notoriously unreliable identification method), though Grant had an alibi for that night. The only other evidence presented by the state was a DNA sample collected from under the victim’s fingernails—a mixture containing that of two people, the victim and a second male profile.

According to NBC News, “Houston’s crime lab at the time was unable to conclude that the other genetic material was Grant’s, and the state’s expert’s testimony suggested to the jury that Grant ‘could not be excluded.’” Grant was later convicted and spent almost a decade in prison.

The Innocence Project of Texas partnered with Angie Ambers, an associate professor of forensic science at the University of New Haven in Connecticut. “Years after Lydell Grant was convicted and sent to prison, there was a paradigm shift in how we interpreted DNA mixtures in criminal casework,” said Ambers. “Rather than having a human DNA analyst interpret a mixture of DNA, computer software programs were developed to reduce the subjectivity in interpretation.”

Ambers referred Grant’s team to a company called Cybergenetics, which had created probabilistic genotyping software called TrueAllele. This software was able to separate Scheerhoorn’s DNA from the other male’s and create a profile of the suspect. The suspect profile was not a match to Grant, so the team sought assistance in locating a suspect. A South Carolina crime lab then allowed the team to upload the profile for comparison against the FBI’s Combined DNA Index System (“CODIS”). The database returned a match to Jermarico Carter, an Atlanta man who was in Houston on the night of the murder. Carter later confessed to the crime, and a Texas judge subsequently recommended that the state’s highest court invalidate Grant’s conviction.

While TrueAllele helped exonerate Grant and find a more likely suspect, this software program (and others like it) is subject to the same limitations as any other piece of software: the code can contain errors (intentional or unintentional), and the results of any particular analysis can hinge on assumptions hidden in the software’s programming.

TrueAllele and STRMix are currently the two main private competitors in the probabilistic genotyping forensic software market. The companies behind both of these products have in recent years resisted defendants’ attempts to access their products’ source code.

The companies claim that access to the source code is unnecessary because the science underpinning probabilistic genotyping is sound in general and because their own in-house researchers have validated the reliability of those products specifically. But this approach seems somewhat akin to taking researchers employed by tobacco companies at their word that cigarettes don’t cause cancer.

It helps to understand that the software creates a statistical model of the data and juggles the various factors to arrive at a probability score—a process very much like the election forecasting models used by The New York Times and Nate Silver’s FiveThirtyEight.com. While the math behind the models is essentially the same, the final numbers can differ from publisher to publisher. And as in election forecasting, there is no way to measure whether the probability is accurate.

According to the EFF, “[t]here is no objective truth against which those numbers can be compared. We simply cannot know what the probability that a person contributed to a DNA mixture is. In controlled testing, we know whether a person’s DNA was a part of a DNA mixture or not, but there is no way to figure out whether it was 100 times more likely that the donor’s DNA rather than an unknown person’s contributed to the mixture, or a million times more likely.”

When presenting the results from software like TrueAllele to juries, prosecutors use phrases like: “It is X times more likely that the defendant, rather than a random person, contributed to this DNA mixture sample.” Further, if an analysis done by software like TrueAllele outputs a number like “1 in 1 quintillion,” and a different program outputs “1 in 10,000,” there is no reason to believe the TrueAllele output is more precise, nor is there any way to objectively evaluate the accuracy of either figure.
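
What such software reports is a likelihood ratio. The sketch below shows the bare form of that statistic with invented probabilities; it is not TrueAllele’s or STRMix’s actual model, but it illustrates how an unseen modeling assumption can swing the result by five orders of magnitude:

```python
# Minimal sketch of the likelihood ratio ("LR") these tools report. All
# probabilities are invented; this is not TrueAllele's or STRMix's model.

def likelihood_ratio(p_if_defendant: float, p_if_random: float) -> float:
    # "It is LR times more likely to see this mixture if the defendant,
    # rather than a random person, contributed to it."
    return p_if_defendant / p_if_random

# The same lab data scored under two different modeling assumptions:
print(likelihood_ratio(0.02, 2e-8))  # assume 2 contributors: LR ~ 1,000,000
print(likelihood_ratio(0.02, 2e-3))  # assume 3 contributors: LR ~ 10
# The report shows only the final number; the assumption lives in the code.
```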

More troubling is that, in an industry where sales of forensic tools rely on impressing prosecutors or law enforcement agents, buyers who typically lack degrees in molecular biology or genetics and cannot independently evaluate vendors’ claims, larger numbers can seem more impressive. This creates a financial incentive for the companies to weight their statistical models in ways that output larger numbers. Without access to the source code, there is no way to determine what choices are being made when processing each sample, or whether the code contains errors.

Such an issue arose in 2015, when authorities in Queensland, Australia, issued a notice to defendants that a “miscode” had affected DNA likelihood ratios in 60 cases in which samples were processed using a specific version of STRMix in use between July 2014 and January 2015.

“The category referred to above involved three-person mixed DNA profiles, where one of the contributors to the mixed DNA profile was an assumed contributor,” said Greg Shaw, senior director of the Queensland Health Forensic and Scientific Services.

A co-creator of the software, Dr. John Buckleton, responded to the discovery of the error by saying, “When we looked at the circumstances needed to cause this, we thought it was almost impossible. We can’t replicate it. The question would be, have they followed recommended processes? Are they following the manual?”

“It has the potential to be a serious ground of appeal for wrongful conviction if the jury has relied on information that is now demonstrably incorrect,” said Queensland defense attorney Michael Bosscher. “Any significant change would need to be reconsidered by a jury because they have relied on particular evidence, demonstrably false evidence now, to arrive at their conviction.”

Tools in use by the DNA laboratory in the office of New York City’s chief medical examiner (“ME”) also came under intense scrutiny in 2017 and are no longer in use, though the lab has said its adoption of new tools was unrelated to the criticism.

In 2008, the lab began using the Forensic Statistical Tool (“FST”), a software program similar to TrueAllele and STRMix, to analyze DNA samples containing multiple donors. Around the same time, the lab also began using a new technique called “high-sensitivity testing” (“HST”) to analyze extremely small samples by “amplifying” them to a level where more definitive results could be obtained.

According to NYTimes.com, “[b]y its own estimate, the lab has used high-sensitivity DNA testing to analyze evidence samples in 3,450 cases [from 2006 to 2017], and the FST in 1,350 cases [from 2011 to 2017]. Cases in which both methods were used may be counted in both totals.”

The Legal Aid Society of New York teamed up with various experts, including some former employees of the lab, to bring a case regarding the reliability of these tools, litigation that eventually won access to the source code.

One FBI expert said, “the FST was designed with the incorrect assumption that every DNA mixture of the same size was missing information or had been contaminated in just the same way. The [tool] didn’t consider that different people in a mixture, especially family members, might share DNA.”

HST also had serious flaws. It turns out that when very small, degraded samples are “amplified,” the result can be more error than data. This is why the approval process for the tool specified that the lab would not process samples smaller than 20 picograms (about three cells’ worth of DNA). It was later discovered that in at least one 2012 case, the lab processed a sample of just 14.5 picograms, and many more very small samples may have been processed.
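
The lab’s 20-picogram floor can be sanity-checked against the standard figure of roughly 6.6 picograms of nuclear DNA per diploid human cell:

```python
# Back-of-the-envelope check, assuming ~6.6 pg of nuclear DNA per cell.
PG_PER_CELL = 6.6

print(20.0 / PG_PER_CELL)   # ~3.0 cells: the lab's approved floor
print(14.5 / PG_PER_CELL)   # ~2.2 cells: the 2012 sample fell below it
```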

The lab had a financial incentive to process samples on the edge of reliability: 50 jurisdictions from as far away as Montana and Texas sent samples to the lab and paid $1,100 per sample for processing.

In 2016, a federal judge ordered the lab to produce the source code for the FST under a protective order. The government initially refused to produce it on the grounds that it was a “proprietary and copyrighted” statistical tool owned by the City of New York.

The code was ultimately released for review by defense expert Nathaniel Adams, a computer scientist and engineer at a private forensics firm in Ohio.

“I did not leave with the impression that FST was developed by an experienced software development team,” said Adams. He continued by saying that, pending more rigorous testing, “the correctness of the behavior of the FST software should be seriously questioned.”

According to a spokesperson for the ME’s office, technology consultants wrote the software for the FST, and few, if any, at the lab had the expertise to double-check the software’s function. “We don’t know what’s going on in that black box, and that is a legitimate question,” said one lab scientist to NYTimes.com under condition of anonymity for fear of workplace retaliation, adding that evidence from older cases should “absolutely” be retested in light of growing questions about the reliability of FST.

The lab published a memo stating that, as of January 1, 2017, it was replacing both HST and FST with newer tools. According to NYTimes.com, “[t]he lab retired the FST in favor of STRMix, a commercially available and FBI-endorsed software program for DNA mixtures that dozens of public labs use.”

The Lower Courts

Though the U.S. Supreme Court has not provided guidance about how lower courts should balance a defendant’s right under the Confrontation Clause with a company’s right to protect its intellectual property, state appellate courts have occasionally weighed in on the issue.

Around the same time defense attorneys in New York were challenging the flawed tools used by the ME’s office for DNA analysis, attorneys in California were fighting for the right of defendants to obtain the source code for TrueAllele.

Martell Chubbs was living in Long Beach, California, in December 1977, when Long Beach police discovered the body of a 17-year-old murder victim. Police obtained a vaginal swab from the victim, but the case went cold.

In June 2011, Sorenson Forensics conducted a DNA analysis on the sample that resulted in a three-person donor profile: the victim and two unidentified males.

No information is available on how the police identified Chubbs, but his DNA was compared to the sample. Sorenson’s analysis concluded that there was a 1 in 10,000 chance that a random Black person other than Chubbs contributed to the sample. The state had the sample retested by the Cybergenetics lab in Pennsylvania. The TrueAllele software reported a 1 in 1.62 quintillion chance that a random Black person other than Chubbs contributed to the sample—a figure implying a vastly stronger match than the one generated by Sorenson.

Because the entire case depended on the reliability of the DNA analysis results from Cybergenetics, the defense successfully obtained a subpoena for TrueAllele’s source code under a protective order. The state refused to comply and requested that an appellate court quash the subpoena rather than allow the district court to exclude the DNA evidence.

On appeal, Chubbs argued that the extreme variance in results between Sorenson’s analysis and that produced by the TrueAllele software made source code access absolutely necessary to test the reliability of TrueAllele’s results.

In its ruling quashing the subpoena, the appellate court reiterated the California Supreme Court’s statement in People v. Hammon, 938 P.2d 986 (Cal. 1997): “It is not at all clear whether or to what extent the confrontation or compulsory process clauses of the Sixth Amendment grant pretrial discovery rights to the accused.”

The appellate court completely missed the larger picture in its ruling: the vast majority of cases never make it to trial and are instead resolved by plea agreement during pretrial. A defendant who cannot obtain information that directly speaks to the reliability of the evidence against him cannot make a truly informed decision about whether to proceed to trial. The trial penalty (whereby defendants who go to trial receive, on average, significantly longer sentences than defendants who plead out) already incentivizes innocent people to take plea deals, and denying a defendant’s confrontation right during pretrial makes going to trial a truly ridiculous gamble.

More recently, in a case of first impression in New Jersey, an appellate court made the opposite ruling. See State v. Pickett, 466 N.J. Super. 270 (2021). Defendant Corey Pickett sought the source code for TrueAllele during pretrial proceedings in order to conduct a Frye hearing, in which the trial judge fulfills a threshold gatekeeping role by determining whether the science underlying proposed expert testimony has “gained general acceptance in the particular field in which it belongs.” Frye v. United States, 293 F. 1013 (D.C. Cir. 1923).

Though the trial court initially denied Pickett’s request, the appellate court reversed the denial, announcing: “We hold that if the State chooses to utilize an expert who relies on novel probabilistic genotyping software to render DNA testimony, then defendant is entitled to access, under an appropriate protective order, to the software’s source code and supporting software development and related documentation—including that pertaining to testing, design, bug reporting, change logs, and program requirements—to challenge the reliability of the software and science underlying that expert’s testimony at a Frye hearing, provided defendant first satisfies the burden of demonstrating a particularized need for such discovery.” 

The court noted: “Without access to the source code—the raw materials of the software programming—a defendant’s right to present a complete defense may be substantially compromised.”

In May 2021, the EFF filed an amicus brief in support of Alvin Davis of California. Davis was charged based in part on an analysis by STRMix after traditional methods were not able to produce a match. After the first trial ended in a hung jury, the state retried Davis, obtained a conviction, and sentenced him to life in prison without parole.

The EFF argued that Davis should be allowed to challenge his conviction and be granted access to STRMix’s source code because its analysis was the only substantial evidence presented that linked him to the crime. The amicus brief stated, “No questioning of the designer or vetting of an abstract algorithm can substitute for independent analysis of the code itself or satisfy the constitutional protections that prevent injustice in criminal prosecutions.”

It’s clear that nationwide guidance regarding access to source code is necessary, either in the form of legislation or a Supreme Court ruling. Allowing states and trial judges, who are often unaware of larger issues in technology or forensics, to set the rules governing defendants’ constitutional rights in this area is dangerous and risks further undermining our system of justice.

Cell-Site Simulators

As illogical as it seems to prevent defendants from obtaining the source code of tools used to prosecute them, even more offensive is a conspiracy by law enforcement and prosecutors to deny entirely that special tools were used in an investigation.

According to a December 2016 report by the U.S. House Committee on Oversight and Government Reform entitled “Law Enforcement Use of Cell-Site Simulation Technologies: Privacy Concerns and Recommendations,” the FBI restricts the sale of cell-site simulators to law enforcement agencies and requires purchasing agencies to sign a non-disclosure agreement, which prohibits them from disclosing, even to courts, when the devices are used.

Cell-site simulators (“CSS”), also known as “IMSI catchers” and “Stingrays,” mimic a cellphone tower, causing nearby cellphones to connect to them. Once a CSS gets a signal from a cellphone of interest to authorities, the device can be used to home in on the exact location of the cellphone in question. The versions sold to local law enforcement are unable to intercept phone calls, text messages, or data packets from surveilled phones, but this is only a software block. It is suspected that versions used by the DOJ and DHS have no such limitation.

According to the Anaheim Police Department’s Chief Cisneros, the equipment is used “for search and rescue, critical missing people, locating injured or suicidal people who are unable to call for help, kidnap or ransom victims, hostage rescues, mass casualty threats, credible hostile threats against churches and religious groups throughout our community, human trafficking, mass murder fugitives ... serial rapist investigations, etc.”

A CSS was used to locate Ghislaine Maxwell, the accomplice of Jeffrey Epstein, while she was evading arrest. Her cellphone was used to call her lawyer, sister, and husband, and the FBI used a CSS to track the phone to its exact location on her 156-acre property.

However, CSSs are sometimes used in mundane criminal cases. In a 2014 robbery case from Baltimore, a police detective refused to answer a defense attorney’s questions during a suppression hearing, citing the non-disclosure agreement his agency signed when purchasing the CSS he used.

“You don’t have a nondisclosure agreement with the court,” said Circuit Judge Barry G. Williams, threatening to hold the detective in contempt of court. The prosecution withdrew the evidence it had obtained after locating the defendant using the CSS and proceeded with the remaining evidence. Officers had also intentionally omitted the use of the CSS from their warrant application to track the suspect.

“[The detective] may have been following orders,” said Hanni Fakhoury, a lawyer with the EFF, “but it’s ridiculous that superiors, whether police or prosecutors, direct officers to evade answering questions about surveillance technology that is now widely known about. It’s even more remarkable the prosecutors simply chose not to use the evidence rather than disclose details about it.... [T]he technology must really be capable of some remarkable things if the government is so desperate to keep it under wraps.”

Courts routinely exclude evidence obtained as a result of using a CSS to locate suspects, though this has thus far been exclusively based on protection provided by the Fourth Amendment to the U.S. Constitution. But when police are willing to conspire to hide from defendants and even courts that a highly sophisticated, yet mysterious, device is being used, this raises serious concerns related to the Confrontation Clause as well. Refusing to disclose a device’s use completely circumvents a defendant’s right to confront and challenge the evidence against him, and it is deeply troubling that prosecutors would intentionally undermine the justice system in this way.

Algorithmic Risk Assessments

In a 2016 case, the Wisconsin Supreme Court upheld the use of an AI-driven tool at sentencing known as COMPAS, which stands for Correctional Offender Management Profiling for Alternative Sanctions. See State v. Loomis, 881 N.W.2d 749 (Wis. 2016).

Loomis argued that the use of COMPAS violated his due process rights in three ways: (1) it denied him an individualized determination at sentencing, (2) it impermissibly took gender into account, and (3) its proprietary, closed-source nature violated his Confrontation Clause right to discern and challenge the tool’s scientific validity.

At least 10 to 15 years was added to Loomis’ sentence due to his COMPAS score, yet the Wisconsin Supreme Court upheld the sentence because the score was advisory, not determinative: the judge retained ultimate discretion to impose the sentence after taking the score into account.

In the same way that the Confrontation Clause has been limited during pretrial evidentiary hearings, courts have limited the right of defendants to challenge the validity of AI-based algorithmic risk assessment tools used at sentencing or to determine whether a defendant is placed on pretrial bond.

According to Anusha Rao, part of the AI Society at Johns Hopkins University, the “biggest selling point [of these AI tools] is that they are objective—those that favor their use tout the impartiality and unbiased nature of mathematical code. While a judge could be affected by emotions and serve harsher punishments, an algorithm would never fall prey to such an inappropriate and human flaw.”

This impression is flawed because of the way AI tools are trained on historical data. When tools are trained with data that reflect over-policing in minority communities, the tools learn to associate minorities with higher offense rates. For example, reliable studies have shown that minorities are no more likely than others to engage in drug use; criminal history data on drug convictions therefore more accurately reflects racism in policing than it does criminal propensity. Yet these tools have no capacity to apply context or to deconstruct racist patterns in the data they are fed.

The only way to be sure that an AI tool is not merely parroting biased training materials (in computer science jargon, “garbage in, garbage out”) is to give defendants access to the source code and the material used to train the AI. Without these, defendants cannot challenge the reliability of the tools or their influence on life-altering decisions.
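
A toy calculation with invented numbers shows how the skew gets baked in. Two neighborhoods with identical drug-use rates generate different arrest rates purely because of policing intensity, and any model trained on those arrests inherits the difference as “risk”:

```python
# Toy demonstration of "garbage in, garbage out"; all numbers are invented.
# Both neighborhoods have the SAME underlying drug-use rate, but one is
# policed twice as heavily, so its residents generate twice the arrests.

TRUE_USE_RATE = 0.10                            # identical in both neighborhoods
arrest_prob_given_use = {"A": 0.10, "B": 0.20}  # unequal policing intensity

arrest_rate = {hood: round(TRUE_USE_RATE * p, 3)
               for hood, p in arrest_prob_given_use.items()}
print(arrest_rate)   # {'A': 0.01, 'B': 0.02}

# Any "risk" model fit to these arrest records scores B residents as twice
# the risk of A residents, though their true behavior is identical. The
# bias sits in the training data, where no audit of the math can find it.
```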

According to Slate.com, “[p]eople presented with the advice of automated tools are prone to automation bias (through which they defer to the automated system without proper scrutiny), struggle to evaluate the quality of algorithmic advice, often discount accurate algorithmic recommendations, and exhibit racial biases in their response to algorithms.” And, of course, judges are not immune to these problems.

Experts who have run simulations on the COMPAS tool have found that 57% of the variation in the scoring is explained by age alone. See “Algorithmic Risk Assessment and the Double-Edged Sword of Youth,” by Stevenson and Slobogin in Washington University Law Review, Vol. 96, Issue 3 (2018).

Youth is almost always a mitigating factor at sentencing, yet the proprietary nature of the COMPAS tool obscures how youth can significantly raise a defendant’s risk score. This takes a mitigating factor (youth) and repackages it as an aggravating factor (risk).
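
A hypothetical score function, not COMPAS’s actual (and secret) formula, shows how a heavily weighted age term converts youth into risk:

```python
# Hypothetical score function built to mirror the Stevenson & Slobogin
# finding that age dominates the score. The weights are invented; COMPAS's
# real formula is proprietary and unavailable for inspection.

def risk_score(age: int, priors: int) -> float:
    # The heavy age term means the youngest defendants receive the
    # highest scores almost regardless of their records.
    return round(max(0.0, 10.0 - 0.15 * (age - 18)) + 0.5 * priors, 2)

print(risk_score(age=19, priors=0))  # 9.85: young, no record, "high risk"
print(risk_score(age=45, priors=3))  # 7.45: older with 3 priors scores lower
```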

Defense teams are completely unable to provide countervailing evidence to a risk assessment because the assessment is fed the very data used at sentencing. With access to the source code of these tools, attorneys could explain how the tools might be flawed or biased, or at least provide context for how a score should be weighed. This right of confrontation, however, is almost completely denied at sentencing and during bond hearings.

Conclusion

The legal battles around breathalyzers and probabilistic genotyping software highlight how important it is for defendants to have access to the source code for software-driven forensic tools. When the software is constructed from thousands of lines of code, and typos can result in unpredictable errors, mere access to design documents and validation studies is insufficient to meet the Constitution’s demand that defendants be allowed to challenge the reliability of these tools.

Further, state-by-state approaches are insufficient. In a legal system where precedent matters, enough courts are getting this issue wrong to raise grave concerns about our future. As the 21st century marches on, we will continue to see more software-driven forensic tools used in criminal investigations. This is why a nationwide solution—either by federal statute or Supreme Court ruling—is the only acceptable method to ensure all citizens have access to the materials necessary to ensure justice.

Failing to make source code access mandatory—at least under a protective order—turns these tools into unimpeachable black boxes whose reliability cannot realistically be challenged, and juries will accept their results without question.

The Supreme Court remarked that “[d]ispensing with confrontation because testimony is obviously reliable is akin to dispensing with jury trial because a defendant is obviously guilty.” Crawford v. Washington, 541 U.S. 36 (2004).

The right to confrontation should not be limited to the confines of the trial proper; the inner workings of any forensic tool must be available even during pretrial and sentencing hearings. The power dynamic in our current criminal justice system is already heavily weighted towards prosecutors, and nearly all cases are disposed of by plea agreement.

The ability to challenge evidence—including during pretrial and sentencing—better ensures real access to justice and provides more than just a sham appearance of due process.

We must remember that our system of laws, which defines various rights, is our own creation. There is nothing inherent in nature about a company’s intellectual property right or a defendant’s right to confrontation. These are choices we have made that reflect our priorities and our collective wisdom as a society.

Intellectual property is valuable and deserves protection, but it should not be weighted more heavily than the freedom of innocent men, women, and children. 

 

Additional sources: abajournal.com; zdnet.com; wcvd.com; apps.des.wa.gov; digitalcommons.law.ou.edu; tsdr.uspto.gov; baltimoresun.com; washingtonpost.com; publicintelligence.net; cehrp.org; gizmodo.com; openscholarship.wustl.edu; nature.com; eff.org; thedailybeast.com; kstp.com; wpbf.com; techcrunch.com; arstechnica.com; timesheraldonline.com; slate.com; latimes.com; jhunewsletter.com; nbcnews.com; nytimes.com; couriermail.com.au; crowelltradesecretstrends.com; inputmag.com
