
Academic Paper Highlights Need to Tighten Rules for Fingerprint Evidence in Light of False-Positive Error Rate

by Steve Horn

A new study published in the UCLA Law Review makes the case for tightening the rules governing the use of fingerprint evidence in U.S. courts.

“The Reliable Application of Fingerprint Evidence,” written by University of Virginia School of Law professor Brandon Garrett, focuses on the State v. McPhaul decision in the North Carolina Court of Appeals in November 2017.

The defendant, Juan McPhaul, faced charges of attempted first-degree murder, assault, and robbery with a dangerous weapon stemming from a Domino’s Pizza delivery in North Carolina. In retracing McPhaul’s steps, law enforcement pulled fingerprint data from pizza and chicken wing boxes and other items seized from the address associated with his order. The prints were cited as evidence in the successful prosecution of McPhaul in trial court.

However, upon appeal, the reliability of that fingerprint evidence was questioned by McPhaul’s defense team. They pointed to inadequate cross-examination responses by the prosecution’s fingerprinting expert about the methodology used to confirm the fingerprints’ accuracy.

While fingerprint evidence is often viewed as nearly infallible, the science backing that assessment is lacking. As a result, the National Academy of Sciences in 2009 and the President’s Council of Advisors on Science and Technology under President Obama in 2016 concluded that fingerprint findings should not simply be referred to as “matches” by expert witnesses called before a court. Instead, conclusions about fingerprinting tests should reflect new findings about their sometimes less-than-foolproof accuracy, including a 2011 Federal Bureau of Investigation (FBI) lab study and a 2014 lab study conducted by the Miami-Dade Police Department.

As Garrett noted in his paper, the FBI study said the “false-positive error rate ‘could be as high as 1 error in 306 cases’” for fingerprint matches, while the Miami-Dade study found a false-positive error rate of 1 in 18.

While McPhaul’s defense did not point to these studies or question the credibility of fingerprint examinations generally, the Court of Appeals agreed with the defense argument that the police department’s expert had inadequately explained the fingerprint examination methodology during cross-examination. Yet, given the other evidence against McPhaul, that error alone was not enough to overturn the conviction.

It may, however, signal a move toward more transparency in the scientific methodology behind the fingerprint testing for prosecutors, Garrett concluded.

“If a technique is a black box, the expert may not be able to say much about how conclusions were reached, except to state that they were reached based on experience and judgment. What should a judge do then?” Garrett asked.

Garrett wrote that “a judge should require that an expert disclose full documentation of every step in their examination process and qualify their conclusions.”

He also said the McPhaul precedent could spark a push toward more rigorous application of Rule 702 of the Federal Rules of Evidence, which governs “Testimony by Expert Witnesses” and calls for “reliable principles and methods” that are “reliably applied” to the facts of a case. Rule 702 was amended in 2000 to incorporate the precedent set by the U.S. Supreme Court’s 1993 decision in Daubert v. Merrell Dow Pharmaceuticals, Inc., which created a five-prong test governing the reliability of expert witnesses and the science they use to make their case in court.

In an interview, Garrett told Criminal Legal News that the State v. McPhaul case could foreshadow a more stringent application of Rule 702 and Daubert as it applies to fingerprint examinations in the U.S. justice system.

“The McPhaul decision represents a new judicial focus on what goes on inside the black box,” wrote Garrett in closing his paper. “For that reason, the ruling should send an important signal to practicing lawyers, judges, and forensic practitioners that the reliable application of principles and methods to the facts matters.”

Steve Horn (SH): How far would you say the McPhaul Appeals Court decision advances this idea of scientific method in the judicial sphere, and do you think it will have precedential value in that state or other states moving forward?

Brandon Garrett (BG): The McPhaul ruling does not engage with larger questions regarding the validity and reliability of many types of forensic evidence. It is narrower. It just focuses on whether the expert can explain the work in a particular case—reliability as applied. That “as applied” issue, however, is an important one in most jurisdictions, which use the modern Rule 702, which requires that judges look to the reliable application of methods. In many disciplines, experts cannot explain why their work was reliable except to say that their experience and judgments tell them to be confident in the answer they reached. Judges should inquire more carefully into how accurate their confidence is.

SH: Fingerprinting and the lack of a coherent scientific method conveyed by the state in this case vis-a-vis its expert witness does not end up getting the jury verdict reversed for McPhaul. You write, therefore, this could limit the precedential value of the case going forward. Is that true and if so, why?

BG: In this particular case, the appellate court concluded that any error was harmless because there was other evidence of guilt in the case. While it is possible that the North Carolina Supreme Court could look at the case, there is less reason to do so when the outcome affirms the conviction.

SH: Given the failure rates you present about fingerprinting in the article—both failing to find a match and fairly often finding the wrong match—what best-practice methods do those in the field use to ensure their fingerprint identifications are legitimate matches? And in what ways might something called a “match” not actually be one, yet be presented as one by an expert witness who fudges or misrepresents the data?

Are there any prominent examples of an expert witness fudging this sort of data on behalf of an overzealous prosecutor or simply just not being ethical in his/her work in order to sell a prosecutor something he/she wants to hear for his/her case? Do these sorts of experts make a lot of money doing what they do?

BG: There are high-profile examples of wrongful convictions caused by errors in fingerprinting, as well as other pattern-matching forensics. Most of the DNA exonerees, for example, were freed by modern DNA testing, but originally convicted due to errors in forensic science. We know that errors happen using any human technique. Experts are not immune from error, but they may be more confident that they get it right because of their training and experience.

One way to protect against experts getting it wrong is to have an independent examiner review the evidence “blind.” The FBI now does that in some cases, in response to the high-profile error in the Brandon Mayfield case. A more systematic way to protect against error is to routinely test experts blind, using proficiency testing, so that they never know when it is a test, and so that we know how often they make errors in their everyday work. Most errors may be false negatives, or failure to include someone, and those errors are incredibly important, too—they may mean that evidence that could be used to convict a guilty person is lost.

SH: Can you explain the term ipse dixit used in your paper and how it applies to expert witnesses and in the case of your paper, fingerprinting expert witnesses? I ask that in terms of how it relates to the current methodological explanations they give when coming under cross-examination in criminal court trials. Am I right to characterize it as something akin to “Trust Us, We're Experts!” to quote the title of the book by John Stauber and Sheldon Rampton?

BG: Ipse dixit is an unproven statement. In the context of forensics, it is the type of conclusion that is based on the expert’s opinion and experience. Pattern-matching forensics are “black box” methods, based on the expert’s own judgment. The expert can only say that she is confident that she is right. If we had real proficiency testing programs, then experts could say much more: they could say how often they get it right. We could then have informed confidence in their confidence.

SH: Your paper was obviously about fingerprinting experts and how it relates to turning how they go about doing what they do into less of a “black box.” But are there other key issues that also involve experts called upon by prosecutors relying on ipse dixit-type explanations to back up their work when coming under cross-examination in criminal court cases? If so, what issues are they and are there any other prominent court cases that arose in those cases akin to that of State v. McPhaul?

BG: Many types of forensics rely on the opinions of experts. It is important to investigate the assumptions underlying those opinions. Unfortunately, in some areas, like in bite-mark matching, scientists have concluded that the opinions of experts are so unreliable that they should never be used in court. Nevertheless, judges continue to let the evidence in. If we cannot ensure minimal reliability in the most serious criminal investigations, then we need to seriously confront the assumptions underlying our system of justice.

SH: What do you hope to be the biggest takeaways for those who read your paper?

BG: Even fingerprint testimony, long a gold standard for forensics, relies on assumptions about its reliability that have not been adequately studied. Experts reach conclusions based on their own personal opinions. We need to do better in criminal justice. Fortunately, serious research is now being done to study error rates and provide a more quantitative basis for many forensic techniques. Until that research reaches the courtroom, however, judges should be far more cautious, and so should we all.

Editor’s Note: Brandon Garrett is the author of the books Too Big to Jail: How Prosecutors Compromise with Corporations; Convicting the Innocent: Where Criminal Prosecutions Go Wrong; and End of Its Rope: How Killing the Death Penalty Can Revive Criminal Justice. He will soon join the faculty at Duke Law School, where his wife Kerry Abrams was recently named dean.
