
The Debunking of Forensic Science: A Decade of Increased Scrutiny Reveals Forensic Processes Prone to Bias and Error

by Casey J. Bastian

When a crime is committed, it is vital that the actual perpetrator be identified and held accountable. However, that process is not always straightforward. Every criminal investigation begins with the analysis of a crime scene and the collection of forensic evidence. But crime is inherently complex. Crime can take place anywhere: in the home, the workplace, motor vehicles, on the streets, and, with increasing frequency, on the internet. Crime is committed at all hours of the day and in every region of the country, whether urban, suburban, or rural. A crime scene is often full of information (what’s referred to as “evidence”) that can identify the manner of the crime as well as the perpetrator.

Evidence can be audio, blood, clothing fibers, digital and/or photographic images, fingerprints, footprints, handwriting, saliva, skin cells, tire prints, or the residual effects and debris of arson, gunshots, and unlawful entry. This evidence must be gathered by crime scene investigators, each with varying degrees of training, experience, and ability. The evidence must be searched for, collected, preserved, secured in tamper-evident packaging, labeled, and sent to an appropriate laboratory for evaluation by forensic experts.

Once the evidence is evaluated by experts, it is then presented as either inculpatory or exculpatory in relation to a suspect. Forensics is a “complex interaction between science and the law.” This process plays an integral role in modern court proceedings and the administration of justice. Forensic evidence analysis can be a powerful tool. It can be used to convict the guilty and to avoid convicting the innocent. Unfortunately, forensic examiners and experts are human beings subject to a variety of biases that can, and often do, lead to errors and wrongful convictions. This is the main problem with forensic science: its current role in our system of justice appears to exceed its capacity for objectivity and reliability.

Overview of Issues

A revelation that bias and error are found in many forensic processes does not necessarily reflect deficiencies in ethics or competency. While there are cases in which forensic science errors were linked to examiners who were careless, undertrained, or who even deliberately misrepresented or fabricated evidence, these are not the typical ways errors are introduced. As examiners are human, “bias can affect even the most honest, competent, and experienced among them.” These errors and biases are typically the result of subconscious cognitive patterns.

What really must be reviewed and protected are the processes through which evidence is evaluated. When the impact of bias on honest evidentiary analysis goes unacknowledged, systemic problems emerge, and those unexamined biases are allowed to distort evidentiary evaluations. This is also when miscarriages of justice occur. The real questions are: Where does forensic bias come from, and how can we minimize it?

Forensic sciences have come under intense scrutiny recently “as an alarming number of forensic science errors have been discovered in wrongful conviction cases.” The National Registry of Exonerations found that “false or misleading forensic science evidence has contributed to the wrongful convictions of over 460 individuals in the United States as of January 2017.” According to The Innocence Project, there have been 375 post-conviction DNA exonerations in the United States since 1989. The misapplication of forensic science contributed to 52% of those Innocence Project cases, and false or misleading forensic evidence contributed to 24% of all wrongful convictions nationally. Scholars and researchers are alarmed by these figures and have been motivated to understand the potential sources of such errors and how they might be prevented.

An April 2019 article by Innocence Project researchers entitled “Cognitive bias research in forensic science: A systematic review” has become Forensic Science International’s most downloaded article. Prior to recent studies, assessments of forensic science had focused only on the underlying science and data. These prior assessments failed to address the processes by which evidence is evaluated and interpreted by forensic experts. Observations of data and subsequent interpretations of forensic evidence are “mediated by human and cognitive factors.”

What Is Forensic Science?

The term “forensic science” comprises a broad range of forensic disciplines. The National Institute of Justice categorizes forensic science into 12 broad categories: (1) general toxicology, (2) firearms/tool marks, (3) questioned documents, (4) trace evidence, (5) controlled substances, (6) biological/serology screening (including DNA analysis), (7) fire debris/arson analysis, (8) impression evidence, (9) blood pattern analysis, (10) crime scene investigation, (11) medicolegal death investigations, and (12) digital evidence.

Within forensic science disciplines, there is extensive variability in technique, methodology, reliability, level of error, research, general acceptability, and published material. Some of these processes are based in the laboratory, others involve interpretation of observed patterns, and some require the analytical skill of trained scientists and law enforcement personnel, together with knowledge of medicine, laboratory methods, and techniques. Forensic science is a term that encompasses a broad spectrum of activities, some of which lack a “well-developed research base, are not informed by scientific knowledge, or are not developed within the culture of science.”

The Duty of Forensic Experts and Impact on Judicial Process

Forensic science experts are expected to engage in a variety of tasks, but a primary duty is often “pattern analysis.” This typically involves comparing two patterns or samples and then making an expert determination as to whether those samples “match.” This is usually done by taking a pattern or sample of unknown origin and comparing it to one of a known suspect — if it appears that they originate from the same source, they match. An example of this is when an examiner compares fingerprints taken from a crime scene to those of a suspect, or when handwriting on a ransom note is compared to a sample of a suspect’s handwriting. If there is a match, it serves to inculpate the suspect.

A primary issue highlighting the real-world consequences of continuing to believe in debunked forensic science processes is what’s referred to as the “CSI effect.” The average person on a jury will often “place an over-valued amount of credulity on evidence based on forensic methods.” Jurors rely heavily on forensic evidence in their decision-making process, without knowing that the evidence often is not as reliable as it appears or as an expert witness represents it to be. Research has revealed that we often assume our thought processes are based on logic and facts. The reality is that all of us can be influenced by prejudice or bias, often subconsciously. This is especially true when evaluating disenfranchised, marginalized, or stigmatized groups.

A significant number of policies and procedures that exist in our current criminal justice system are intended to facilitate objective and just outcomes. This includes standards for evidence, constitutional rights, and other features designed to prevent bias and the resulting injustice. Despite these safeguards, subconscious bias has dramatically impacted the forensic sciences. The more our criminal justice system has come to rely on these disciplines in the quest for the truth, the more often their inherent shortcomings have harmed innocent people. We have long encouraged the objective analysis of evidence rather than hunches, suspicions, and faulty reasoning, but with each wrongful conviction and exoneration, we are becoming aware that there is a long way to go.

Ten years ago, the National Academy of Sciences (“NAS”) found that, with the exception of DNA analysis, “no forensic method has been rigorously shown to have the capacity to consistently, and with a high degree of certainty, demonstrate a connection between the evidence and a specific individual or source.” That’s a stunning admission. Yet we continue to use these methods without informing decision-makers or triers of fact (i.e., jurors or judges) that the evidence they are relying on might not be as reliable as they are led to believe. The NAS also found that this “too often results in wrongful convictions.”

Discussion on Research into Cognitive Bias

Cognitive bias is a systematic error in thinking that leads a person to misinterpret information from the surrounding world, undermining the rationality and accuracy of decisions and judgments. Signs of being influenced by cognitive bias include: (1) paying attention only to news articles that confirm your beliefs, (2) assigning blame to external factors when things don’t go your way, (3) assuming everyone shares your beliefs and opinions, and (4) learning a little bit about a subject and then believing you’re an expert on it.

Cognitive bias can be caused by many different things, but the primary contributing factors are mental shortcuts known as heuristics. The world is complex and contains an overwhelming amount of information. If people had to think through all possible options when making a decision, even the simplest low-stakes choice would take an enormous amount of time. Thus, it’s often necessary to rely on various mental shortcuts that allow for quick decisions. Although these shortcuts are often very accurate, they can also lead to errors in thinking. Basically, cognitive bias is the mind’s way of attempting to simplify, and make sense of, the incredibly complex world in which we live.

Other causes of cognitive bias include: (1) social pressures, (2) emotions, (3) individual motivations, and (4) limited ability of the mind to process information. 

Academic research into cognitive bias in forensic science has shown that “any technique or process that includes subjective assessment and comparison is potentially susceptible to bias.” This has been demonstrated in nearly every forensic science field, including: (1) bullet comparisons, (2) fingerprint, hair, and handwriting comparisons, (3) fire investigation, (4) forensic anthropology and odontology, and (5) even DNA mixture interpretation, despite DNA analysis being the one method the NAS seemingly exempted as rigorously consistent and certain.

Kerry Robinson was finally exonerated, after 18 years in prison for a rape he did not commit, when DNA mixture test results were re-analyzed. In 1993, three teenagers broke into the home of a 42-year-old woman and raped her. The woman identified two individuals from a local high school yearbook. One of those teens was Tyrone White, and the other was Derrick Smith. She did not identify Robinson.

The trial was delayed until 2002, and two pieces of evidence were used to convict Robinson — the incentivized accusation of White and “inaccurate and overstated testimony from a Georgia Bureau of Investigation DNA analyst.” That analyst initially stated he could not exclude Robinson as a source of the DNA mixture recovered from the victim. However, his testimony changed from “couldn’t exclude” to a “match.” The analyst was wrong. The true results indicated that it was 1,800 times more likely that a random Black person contributed the DNA than Robinson. Biological Sciences Professor Greg Hampikian had the DNA tested by 17 experts, and only one agreed it was Robinson’s. Hampikian said, “The problem is that labs don’t know when to say, this is really inconclusive.” He noted that experts may try to force a conclusion when there simply isn’t one.
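For readers unfamiliar with how DNA analysts express such figures, the statistic in Robinson’s case is a likelihood ratio. The formulation below is a generic textbook sketch, not the notation used by the Georgia Bureau of Investigation:

$$
LR = \frac{P(E \mid H_p)}{P(E \mid H_d)}
$$

where $E$ is the observed DNA mixture, $H_p$ is the proposition that Robinson contributed to it, and $H_d$ is the proposition that an unrelated person did. A finding that the evidence is 1,800 times more likely under $H_d$ corresponds to $LR \approx 1/1800$, which points toward exclusion; testimony of a “match” asserts the opposite direction entirely.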

As humans are the primary “instrument” of expert analysis in forensic disciplines, their conclusions are inherently subjective. When cognitive processes play such an integral role in one’s work, a risk of cognitive bias is created whenever contextual information surrounding a case is observed. For example, crime scene personnel might both collect the evidence on-scene and conduct the forensic work back at the laboratory. Examiners can be exposed to irrelevant contextual information, and this can influence the “analysis, evaluations, interpretations, and conclusions at the forensic laboratory.” What information is “task-relevant” isn’t always clear, and cognitive biases are frequently exacerbated by this unnecessary information. In 2009, the National Research Council (“NRC”) flagged this concern, writing that “forensic science experts are vulnerable to cognitive and contextual bias” that “renders experts vulnerable to making erroneous identifications.”

There are a number of identified categories of cognitive bias, and the terminology varies across the literature. Many effects are similar and can accumulate, producing what are known as the “bias cascade” and “bias snowball” effects. The bias cascade effect occurs “when bias arises as a result of irrelevant information cascading from one stage to another.” The bias snowball effect is slightly different: when one piece of evidence influences another, “greater distortive power is created because more evidence is affected (and affecting) other lines of evidence, causing bias with greater momentum,” resulting in a “snowball of bias.” With the snowball effect, bias doesn’t just cascade from one stage of the analysis to another; “bias increases as irrelevant information from a variety of sources is integrated and influences each other.”

The two most prominent forms of bias are “contextual bias” and “confirmation bias.” The leader in the fight against these negative outcomes in the forensic sciences is Itiel Dror. An unassuming researcher at University College London, Dror has spent decades exposing cognitive biases in the forensic sciences. Dror and his colleagues have found bias in nearly every field that involves a degree of subjective analysis of ambiguous crime scene evidence. “In the span of a decade, cognitive bias went from being almost totally unheard of in forensics to common knowledge in the lab,” wrote Brandon Garrett, a professor at Duke University School of Law, in his book Autopsy of a Crime Lab: Exposing the Flaws in Forensics. Garrett also wrote, “We can especially thank Itiel Dror for helping bring about the sea change.”

Identified Types and Sources of Bias and Errors

To truly understand this problem, it is important to first know both the sources of bias and the types of errors and biases. There are eight identified sources of bias that can impact observations and conclusions. According to the Journal of Forensic Sciences (“JFS”), these sources are organized into a three-category taxonomy, illustrated as a pyramid. “Category A,” at the peak, involves “sources relating to the specific case at hand.” “Category B” involves sources “relating to the specific person conducting the analysis.” At the bottom, “Category C” includes “sources that relate to human nature.” The JFS illustration lists the eight sources, from top to bottom, as: (1) data, (2) reference materials, (3) contextual information, (4) base rate, (5) organizational factors, (6) education and training, (7) personal factors, and (8) human and cognitive factors and the brain.

There are three basic types of errors identified in this effort to understand cognitive bias. The first type of error is the “ethics violation,” which includes fabricated evidence, intentionally erroneous results, or the covering-up of mistakes. The second type is the “honest error,” which includes a “lack of training and mentoring, feeling pressure to complete work or being overwhelmed with work, [and] administrative errors or complacency in one’s work.” The third type is “biased oversight,” which involves experts’ unwillingness to admit to the influence of bias on their work. There are dozens of identified types of biases, and some of the more frequently observed ones are discussed here.

When what an expert expects to find affects what is actually found, it is known as “expectation bias,” also called “experimenter’s bias.” For example, if an expert were to minimize or disregard the importance of findings that do not comport with initial expectations, while simultaneously validating erroneous evidence that does support those expectations, that is expectation bias. Expectation bias is closely related to “observer expectancy effects,” in which a researcher subconsciously manipulates an experiment or data interpretation to achieve results in line with prior expectations. It is also similar to confirmation bias.

“Anchoring effects,” or “focalism,” are also similar to expectation and confirmation bias. This occurs when a person places excessive reliance on the initial information when making later judgments. The best example is when investigators identify a person of interest, and after this identification, the evidence and circumstances are explained only to implicate that person. Investigators then ignore better, alternative explanations of what could have happened or who else might be the perpetrator.

The “reconstructive effect” is related to the process of memory recall. If an investigator chooses to rely only on memory rather than contemporaneous notes, gaps in the memory can be filled with what they believe “should have happened” rather than the facts.

Then there are “role effects.” These arise when an investigator or expert identifies with either the defense or prosecution team within our adversarial system of justice. This identification introduces subconscious biases that are particularly influential in the decision-making process, especially when ambiguities exist. If an examiner finds no match between a suspect and the trace evidence from a crime, an examiner hired by the prosecution might claim the findings are neutral rather than stating that the conclusion likely excludes the suspect. The reverse is often seen as well.

Role effects are slightly different from “motivational bias.” Role effects reflect a subconscious alignment with a particular side of the process. Motivational bias occurs when evidence or information that supports a desired outcome receives less scrutiny than information supporting a less desired outcome. In a law enforcement context, motivational bias might cause an investigator to “find” evidence of guilt for a particular suspect when that investigator is “positive” about the suspect’s guilt. This form of bias, and the misconduct it represents, is well documented. “Target-driven bias” is another similar category. This occurs when an investigator identifies a suspect and then subconsciously works backwards, fitting the evidence to the suspect. Critics of the forensic process liken it to “shooting an arrow at a target and drawing a bullseye around where it hits.”

There is also “in-group bias.” Humans highly value, and have an inherent preference for, those who are similar to themselves. Forensic experts exhibiting in-group bias perceive their own work product, and that of close colleagues, as more accurate than it is. This allows deficiencies in the work of peers to be easily disregarded, overlooked, or ignored.

When people become accustomed to a particular result occurring at a certain rate and expect that rate to continue, this is called “expected frequency bias.” This bias leads to errors because past experience causes people to form expectations about future results instead of focusing on the actual evidence. This can lead to the expectation of an outcome before the evidence is even analyzed.

The Lead-in on Contextual Bias

The mental process of knowing — which includes awareness, perception, reasoning, and judgment — is referred to as “cognition.” These processes are distinct from emotion and volition. Cognitive bias is often defined as a “pattern of deviation in judgement whereby inferences” concerning people or situations “may be drawn in an illogical fashion.” Making judgments is a “natural element of the human psyche” frequently displayed by people in everyday life. Many behaviors are recognized as cognitive processes, considered “mental shortcuts” intended to speed up decision-making, including: being influenced by the beliefs of others, especially peers; jumping to conclusions; only observing what is expected or desired; and “tunnel vision.”

However commonplace or inherent in daily human interactions these cognitive processes might be, they must be guarded against in the forensic sciences, “where many processes require subjective evaluations and interpretations.” Forensic science processes contain subjective assessment and comparison stages where subconscious personal bias, known as “cognitive contamination,” can undermine the impartiality and objectivity of the forensic process.

Multiple studies over the last several decades have frequently revealed that context plays a significant role in human judgment. It is important to understand the psychological background in relation to the contextual bias in forensic processes. Generally, information is processed “bottom-up” or “top-down.” Bottom-up processing is the “data-driven processing of information that reaches our cognitive system through our senses.”

Often, the brain finds it too demanding to fully process information when there is a lot of it to take in. This causes the brain to utilize cognitive mechanisms, including selective attention, chunking, and automaticity, in an effort to “effectively handle the large amounts of incoming bottom-up information.” These mechanisms draw from prior “experience, knowledge, and expectation.” Each influences how the brain processes incoming information. As we acquire knowledge and experience, irrelevant information is disregarded, and the processing becomes “top-down.” Top-down processing is “conceptually driven” and is integral in most cognitive operations. Cognitive experts believe that this form of processing “stands at the core of human intelligence and expertise.”

The perfecting of our top-down processing abilities allows cognitive mechanisms to become more powerful. When processing information in a particular discipline, this refinement allows us to become “experts.” Experts attain abilities that permit higher-level performance than non-experts can achieve. To reach high cognitive performance, experts need “well-organized knowledge;” “sophisticated and specific mental representations;” and an ability to process “very large or very small amounts of information,” while dealing with “ambiguous information and many other challenging tasks.”

Though expertise and top-down cognition generally support superior performance, they can also degrade it. Certain vulnerabilities and weaknesses can cause errors, including bias, limited control, the overlooking of important information, restricted flexibility, and tunnel vision. As one researcher observed, paradoxically, “the very underpinning of expertise can also result in degradation of performance in specific cases.”

Such issues are inherent cognitive side-effects of expertise, presenting professional challenges in high-skill disciplines, including the forensic sciences. Intent is normally absent from these cognitive side-effects, which causes many experts to fail to recognize their occurrence. This is a very important observation. Researchers of bias and error note that not “understanding the underlying cognitive mechanisms involved in these phenomena often results in ineffective ways to counter-measure their effects.” Awareness is needed precisely because contextual biases are too often framed as merely an ethical concern.

The problem of cognitive bias can’t be solved simply by implementing more ethical codes or training. The idea that it is simply an ethical problem is not just an unfair characterization of most forensic experts; it is also ineffective in dealing with the actual problem. The suggestion becomes one of implied intentionality, something to be overcome by “mere willpower.” This is a misunderstanding of the cognitive processes that produce contextual bias.

Contextual information can take several different forms. If a pathologist is first told that a body was discovered in an illegal grave and then asked to determine whether the manner of death was homicide, that context can cause bias. After all, bodies found in illegal graves are far more often homicide victims than people who died of natural causes. But for the pathologist’s task, this is likely “task-irrelevant” information.

Imagine that a firearms examiner is asked whether two spent cartridges came from the suspect’s gun and is told that the gun is missing two cartridges. That fact is irrelevant to determining whether the spent cartridges were fired from that gun, yet such contextual information can easily affect the examiner’s conclusion. While the contextual information in these examples is not needed by the expert to reach a conclusion, it can be important to the trier of fact. It is therefore vital that the expert’s judgment be based only on relevant information.

One researcher noted that by considering contextual information, experts may well reach correct conclusions, but this can still undermine the ability of the trier of fact to determine the truth of the matter and reduce the likelihood of a just outcome in the legal process. This is called the “criminalist paradox”: “[b]y helping themselves be ‘right’ such analysts make it more likely that the justice system will go wrong. By trying to give the ‘right’ answer, they prevent themselves from providing the best evidence.”

Contextual information can be classified into four levels. These classifications are instructive because the way in which the potential biasing effect of contextual information might be minimized varies by level. Understanding this can “serve as the basis for the development of appropriate standards” for dealing with contextual information in forensic analyses. The levels are based on the “distance” between the contextual information and the material evidence being considered; the lowest level denotes the closest, most intimate connection to the “trace” evidence at hand.

The first level relates to “contextual information that is inherent to the examined trace and cannot easily be separated from it.” For example, a forensic handwriting expert asked to determine the authorship of a threatening letter should consider only the handwriting itself, not the ink, paper, stamps, or sometimes even the content of the letter, though all of these are part of the “trace” that is the letter. The expert could also be biased by their interpretation of the meaning of the words. While all of these elements may be important to the investigator or trier of fact, they are not necessarily needed by the expert doing the pattern analysis. This can be managed by not providing the entire original trace to the examiner and, for instance, removing the meaning of the writing. Stripping away irrelevant contextual information allows for comparison of only the pertinent questioned material — the handwriting — and the reference material.

The second level relates to reference materials. When crime scene trace is analyzed simultaneously with reference material from a suspected source (e.g., the suspect, a gun, DNA, etc.), the “perception and choice of the features of the questioned material may become partly dependent on what the expert has seen in the reference material.” While the comparison requires analysis of both the questioned material and the reference material, the “perception of the features of the questioned item should not.” When the trace evidence is being analyzed, the reference material is itself contextual information. For example, if during a fingerprint analysis an examiner were allowed to re-analyze the trace fingerprint after analyzing the reference (suspect) print, there is a risk that the expert will “see” features that are not actually there or that they had not seen before. As “[r]eference material can have a strong biasing effect on the perception of the features of the questioned material,” the expert should not receive such material “before or during the analysis of the questioned material.”

Case information in its broadest sense is encompassed within the third level. Consider the analysis of a suicide note where the question is whether the note was written by the decedent or his twin brother. If the police send the entire case file to the examiner, the fact that the twin lived in the same house, had two prior convictions for violent offenses, and was in financial trouble that the decedent’s portion of a recent inheritance would alleviate is relevant to the investigator and part of the case information. But such information is completely irrelevant to the handwriting expert in the performance of his analysis. One suggested solution is again to avoid exposing the expert to irrelevant information. Distinguishing relevant from irrelevant information can be difficult at times, but it is generally straightforward, and the need to avoid providing both too much and too little relevant information should be emphasized.

Level four consists of “base-rate” information. This is “organization and discipline-specific information” and can create expectations of an outcome before any examination. Police investigations by their very nature compile incriminating evidence prior to forensic evaluation; that is to say, there “is a high base rate of inculpation.” An expert receiving such information might begin unconsciously looking for outcomes that support the investigator’s theory. Base-rate information is present in every investigation and can have a biasing effect on the probative value of the examination itself. Some researchers suggest adding fake casework to examiners’ caseloads in an effort to mitigate the effect of base-rate information. Others counter that this approach is impractical, as only a small amount of fake work can be added without harming efficiency. However, the “psychological effect” of simply knowing that fake cases may be present is likely the more powerful influence, which might make it an effective mitigation tactic after all.
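To make the four levels concrete, here is a minimal sketch, in Python, of a context-triage step performed by a hypothetical case-manager role that decides what reaches the examiner. The level names follow the taxonomy above; everything else (the dictionary layout, the filtering rule) is an illustrative assumption, not a published procedure.

```python
# Hypothetical context-triage sketch. A "case manager" filters what the
# examiner sees, keyed to the four levels of contextual information
# described above. Names and structure are illustrative only.

TASK_RELEVANT_LEVELS = {"level_1_trace"}  # only the (redacted) trace itself

case_file = {
    "level_1_trace": "scanned handwriting, textual content masked",
    "level_2_reference": "suspect handwriting exemplar",     # withheld until later
    "level_3_case_info": "twin brother's record and motive",  # never forwarded
    "level_4_base_rate": "submitting detective's theory",     # never forwarded
}

# Only task-relevant material is forwarded to the examiner.
examiner_packet = {level: item for level, item in case_file.items()
                   if level in TASK_RELEVANT_LEVELS}
print(examiner_packet)
# {'level_1_trace': 'scanned handwriting, textual content masked'}
```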

Efficiency and backlogs in the processing of forensic evidence are real concerns. According to a 2005 census by the U.S. Bureau of Justice Statistics, the typical publicly funded laboratory began the year with a backlog of about 401 requests for services, received another 4,328 such requests, and completed 3,980 of them, implying a year-end backlog of roughly 750 requests. Most of these requests related to controlled substances. The backlog has risen every year since the first census in 2002. The Department of Justice defines a request as backlogged if more than 30 days elapse without the laboratory providing an analysis report.

The 2005 census noted that across local, state, and federal laboratories there was a combined backlog of 435,879 requests for forensic analysis. The backlog, and the errors that arise from processing done too quickly, is only getting worse as law enforcement makes more requests for “quick” analyses. The NRC identified the main cause of these backlogs: laboratories are “under resourced and understaffed.” This makes it difficult for laboratories to effectively “inform investigations,” “provide strong evidence for prosecutions,” and “avoid errors that could lead to imperfect justice,” and those difficulties often manifest as mishandled and improperly analyzed evidence.

As a means to counter the effects of contextual bias, a number of options have been suggested. These include blind verification processes, evidence line-ups, blind testing, laboratories operating independently of law enforcement, double blind proficiency tests, and competitive self-regulation. Researchers believe that these suggestions “should apply to all forensic analyses that may be used in litigation.” Several stories of wrongful convictions support that position.

In 2004, there was a bombing on a train in Madrid, Spain. The Spanish police gave the FBI digital images of partial fingerprints collected from bags of plastic detonators. From those images, the FBI identified Brandon Mayfield. The FBI was wrong, and only through an uncanny sequence of events was the error discovered. The case’s high profile was enough to influence the examiner to match more characteristics than were actually present. The initial positive identification of Mayfield, who was Muslim, was made by a respected examiner and was not withheld from the examiner performing the secondary verification. It was later revealed that overconfidence in the FBI lab, the urgency of the case, and knowledge of Mayfield’s religious beliefs all led to the false verification, i.e., several contextual biases influenced the erroneous conclusion.

Another case is that of Stephan Cowans. He was identified as a suspect by someone who had sold him a hat. Subsequently, a latent fingerprint found in the home where a mother and daughter were taken hostage was misattributed to Cowans. It turns out that Cowans’ name was inadvertently left on the cards containing prints of the hostages used for elimination. A fingerprint examiner who discovered these issues failed to reveal the mistake throughout Cowans’ trial. All of these factors led to Cowans spending six years in prison for a crime he did not commit.

Digital forensic experts are not immune from the effects of contextual bias, either. In one study, 53 examiners were given digital evidence in the form of a hard drive, and each expert found more or less evidence of guilt depending on the contextual information they were given. As devices such as phones, laptops, and flash drives become increasingly integral to criminal investigations, the reliability of digital forensics is being questioned more often. Ian Walden, a professor of information and communications law at Queen Mary University of London, believes there is a tendency to believe the machine: “[W]e need to be careful about electronic evidence. Not only should we not always trust the machine, we can’t always trust the person that interprets the machine.”

An Overview of Confirmation Bias

Confirmation bias is another frequently identified source of error in forensic science. Significant research has revealed that an “individual’s preexisting beliefs, motives, and situational context can indeed influence the collection and interpretation of evidence in criminal cases” and cause confirmation bias. People, even experts, will readily see and credit information which is consistent with currently held beliefs and opinions. The reverse is true as well.

Some of the recommended steps to avoid errors, especially confirmation bias, include: (1) training and participation in proficiency testing, (2) acceptance of bias (the most effective regulators of bias are an examiner’s objectivity and integrity, along with awareness of their biases and ways devised to correct them), (3) limiting daily pressures (the greater the pressure, the greater the chance for a mistake — there should never be an expectation of instant results), (4) remaining objective (“It takes a conscious effort by the analyst to remain impartial.”), (5) seeking to disprove (all possibilities need to be sought out, not just the ones confirming our beliefs), (6) limiting outside influence and leaving out all task-irrelevant materials, (7) using scientific protocols and established methodologies, and (8) limiting overconfidence (the statement “it could never happen to me” is troubling, and one should seek outside consultation without fear).

For centuries, researchers have recognized that bias can influence behaviors and thoughts. In 1620, philosopher Francis Bacon noted that once a person adopts an opinion, “they will look for anything to support and agree with that opinion.” Bacon also documented how people are moved more by positives than by negatives. In 1852, journalist Charles Mackay observed, “When men wish to construct or support a theory, how they torture facts into their service.” Psychologist Peter C. Wason completed a study in 1959, which found that “[c]onfirmation bias is perhaps the best known and most widely accepted notion of inferential error to come out of the literature on human reasoning.” In 2016, the President’s Council of Advisors on Science and Technology released a report that identified confirmation bias as a “serious issue in forensic science.”

Despite its known prevalence, a 2017 survey of forensic examiners found that they maintain a “bias blind spot.” Dror and a colleague asked 400 forensic scientists from 21 countries about this phenomenon. While 75% of the experts recognized that irrelevant information could affect their analyses, and 52% saw it as a concern in their specialty, most (74%) denied it would affect their own conclusions — a case of bias for thee, but not for me.

In one study, polygraphers who were given information indicating that the suspect was guilty “interpreted the same polygraph charts as more incriminating than did other examiners who believed the suspect to be innocent.” Other studies have found that confirmation bias can affect the behavior and judgment of police interrogators, eyewitnesses, expert witnesses, and jurors. To prevent the cognitive contamination that results in confirmation bias, it has been suggested that knowledge of prior findings be withheld until independent verification is complete. An examiner can be influenced by reading previous findings or even by a simple conversation with a colleague.

The most prolific researcher in the effort to combat these errors, biases, and the resulting miscarriages of justice is Dror. He has published several articles, including “Why Experts Make Errors”; “Contextual Information Renders Experts Vulnerable to Making Erroneous Identifications”; “Subjectivity and Bias in Forensic DNA Mixture Interpretation”; “Meta-Analytically Quantifying the Reliability and Biasability of Forensic Experts”; and “Context Management Toolbox: A Linear Sequential Unmasking (LSU) Approach for Minimizing Cognitive Bias in Forensic Science.”

The concept behind linear sequential unmasking (“LSU”) has been around for more than 60 years, and LSU is now part of the standard operating procedures of many forensic institutes and labs. LSU restricts the flow of information to ensure that only “task-relevant” information is provided to the examiner. Information is “task-irrelevant” if it is not necessary for drawing conclusions about the propositions in question, or if it assists only in drawing conclusions from something other than the physical evidence. LSU does have limitations. It can only be used for certain kinds of evidence analysis, i.e., “evidence for which it is possible to analyze the questioned material without knowledge of the reference material.” It works for evidence like fingerprints and DNA, but not for bullet or cartridge comparisons, where the examiner must review the reference material to identify the striations and points of agreement or disagreement.
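As an illustration of the sequencing LSU imposes, here is a minimal sketch in Python. The class and method names are hypothetical, invented for this example; real LSU is a documented laboratory procedure, not software. The point is only that conclusions about the questioned material are committed to before the reference material is unmasked.

```python
# Illustrative Linear Sequential Unmasking (LSU) sketch.
# All names are hypothetical; this is not code from any lab system.
from __future__ import annotations
from dataclasses import dataclass, field


@dataclass
class Examination:
    questioned_features: list[str] = field(default_factory=list)
    locked: bool = False  # True once stage-1 findings are committed

    def document_questioned_material(self, feature: str) -> None:
        """Stage 1: analyze ONLY the questioned material (e.g., the
        latent print), with no exposure to the reference sample."""
        if self.locked:
            raise RuntimeError("Findings are locked; no silent revisions.")
        self.questioned_features.append(feature)

    def unmask_reference(self) -> None:
        """Stage 2: commit the stage-1 findings, then reveal the
        reference material. Later changes must be documented."""
        self.locked = True

    def compare(self, reference_feature: str) -> bool:
        """Stage 3: compare against the now-visible reference."""
        if not self.locked:
            raise RuntimeError("Reference seen before analysis was locked.")
        return reference_feature in self.questioned_features


exam = Examination()
exam.document_questioned_material("left-loop ridge pattern")
exam.unmask_reference()
print(exam.compare("left-loop ridge pattern"))  # True
```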

Science, published by the American Association for the Advancement of Science, recently ran a profile of Dror titled “The Bias Hunter.” “I don’t know anybody else who’s doing everything that Itiel [Dror] is doing,” said Bridget Mary McCormack, chief justice of the Michigan Supreme Court, who has worked with Dror on a U.S. Department of Justice task force and collaborated with him on previous studies. McCormack added, “His work is monumentally important to figuring out how we can do better. To my mind it’s critical to the future of the rule of law.” Not everyone is so enthusiastic.

In 2021, Dror created a furor when he published an article suggesting that “forensic pathologists were more likely to pronounce a child’s death a murder versus an accident if the victim was Black and brought to the hospital by the mother’s boyfriend than if they were white and brought in by the grandmother.” The National Association of Medical Examiners (“NAME”) alleged that Dror had committed ethical misconduct and demanded that University College London end his research. Eighty-five prominent pathologists called for the article’s retraction. The editor of the JFS, which published it, said he hadn’t witnessed so much anger or “so many arguments in the journal’s 65-year history.” But was Dror wrong?

Dror’s JFS article was titled “Cognitive Bias in Forensic Pathology Decisions.” The question was whether confirmation bias and contextual influence can affect a forensic pathologist’s decisions. The researchers examined all death certificates for children under the age of six in the state of Nevada over a ten-year period. The study involved 133 forensic pathologists, all of whom were board-certified by the American Board of Pathology and members of NAME. Each was randomly assigned to either the “Black condition” or the “White condition.” In the Black condition, the child was Black and being cared for by the mother’s boyfriend; in the White condition, the child was white and being cared for by the grandmother.

Each pathologist was provided a vignette that described the case of a 3.5-year-old child who was presented to the emergency room after the caretaker found the child unresponsive on the living room floor. The examination vignette revealed that the child died with a skull fracture and hemorrhaging of the brain. Of the participants, 78 ruled the manner of death as “undetermined.” Of the remaining 55, 23 ruled it an “accident,” and 32 ruled it a “homicide.” In the Black condition, “the pathologists were about five times more likely to rule the death as a ‘homicide’ rather than an ‘accident.’” In the White condition, the results were the opposite. Dror simply identified real-world consequences of deficiencies in the process.
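To see where a figure like “about five times” comes from, the sketch below works the arithmetic on a 2×2 table. The per-condition counts are hypothetical, chosen only to be consistent with the aggregate figures reported above (32 homicide and 23 accident rulings); the study’s actual per-condition breakdown is not given in this article.

```python
# Hypothetical per-condition counts consistent with the article's
# aggregates (32 homicide, 23 accident). NOT the study's actual data.
rulings = {
    "Black condition": {"homicide": 23, "accident": 5},
    "White condition": {"homicide": 9, "accident": 18},
}

for condition, counts in rulings.items():
    ratio = counts["homicide"] / counts["accident"]
    print(f"{condition}: homicide-to-accident ratio = {ratio:.1f}")

# Black condition: homicide-to-accident ratio = 4.6  (about five times)
# White condition: homicide-to-accident ratio = 0.5  (direction reversed)
```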

Here is an example of confirmation bias at its worst. On the night that Pamela Richards was murdered, police immediately suspected her husband, William. After he was identified as the suspect, police failed to secure the crime scene or fingerprint items that the killer would have handled. Police knew the killer had been in the couple’s motorhome, but they never collected prints before the motorhome was disposed of.

Police also characterized small blood stains on William’s clothing as “medium velocity impact spatter” (“MVIS”). William claimed that the small spots of blood on his shoes came from when he cradled his dead wife’s body in his arms. It would later be revealed that, given the brutality of her murder, the small amount of blood was unlikely to have been MVIS and that the evidence actually corroborated William’s explanation. Other evidence against William included a bitemark and tufts of fiber found under Pamela’s fingernails that allegedly came from William’s shirt.

It would later be revealed that the bitemark did not match William’s dentition, and the expert who gave the original testimony recanted it; he had wanted to “be the hero” and “finally convict” William after two trials. It was also revealed that the fiber evidence had been manufactured outright by a county criminalist in an effort to link William to the murder. DNA evidence that had not been tested excluded William as the perpetrator. These facts came to light only after William had spent 23 years in prison.

David Camm came home to find his wife and daughters shot to death. High velocity impact spatter was used as evidence to home in on Camm. Police went so far as to ignore “ample evidence that a sexual predator actually committed the murders.” One of the main pieces of evidence inexplicably ignored by police was a bloody sweatshirt found under one of the victims. The sweatshirt was prison-issued, with the prisoner’s name written on the collar. Camm would spend 13 years in jail, embroiled in legal battles, before being exonerated in 2013.

Bladimil Arroyo was the suspect in a Brooklyn murder. In 2001, at the time of Arroyo’s interview, police believed that the victim had been stabbed. After a lengthy interview, Arroyo provided a detailed confession as to how he had stabbed the victim. It was later determined that the victim was shot. Arroyo had learned facts (as the police believed them to be at the time) about the crime from the detectives and turned this into a confession. The confession “confirmed” the police’s initial, but incorrect, theory about the crime. Incredibly, even after this was revealed, the state of New York tried to prosecute Arroyo using his confession.

When Experts Are the Problem

Experts are supposed to be trusted and reliable. What happens when experts are the problem? William Lewinski works for the Force Science Institute. Lewinski also “trains police to shoot first and quickly.” Then, he defends law enforcement as an expert witness. The New York Times featured an article about Lewinski’s perspective on “inattentional blindness.” This is when a person becomes focused on a single thing in a complex environment and fails to see “something unusual and unexpected.”

Critics argue that Lewinski misrepresents this concept of inattentional blindness when defending cop killings. Lewinski argues that this blindness causes the police to become unaware that the “the person they are shooting is not a threat.” Critics assert that this is patently false. One researcher said, “Simply stated, Lewinski’s conclusions on the role of inattentional blindness in police shootings and memory cannot be justified by the existing scientific data.”

A new study by the University of Washington revealed that over the last 40 years, experts have undercounted police killings in the United States. It is estimated that between 1980 and 2018, physicians who worked as government medical examiners or coroners “missed or covered up more than 17,000 police killings.” In a 2011 survey, 22% of such physicians “reported pressure from government officials to change the cause or manner of death on a certificate.”

A letter from 74 pathologists lashed out at the study. Their bizarre open letter stated that “Manner determination [e.g., homicide, suicide, accident, natural or undetermined] is not a ‘scientific’ determination. It is a cultural determination that places a death in a social context for the purpose of public health statistics.” The 74 pathologists tried to argue that there is no “right answer” in manner of death determinations and argued that the goal is “consistency rather than some nonexistent criteria for correctness.” Their attempt to have the study censored and retracted failed, but one could question their ideology.

The Association for Psychological Science (“APS”) released an article entitled “Are Forensic Experts Biased by the Side that Retained Them?” that addresses this issue. The idea that an expert can be influenced by the group they identify with (i.e., the side that retained them) has been a long-standing concern within our adversarial legal system. In the study described, 108 forensic psychologists and psychiatrists were paid to review offender files. Some of the experts were told they were working for the defense, while the rest were told they were working for the prosecution. The experts were tasked with scoring the offenders on two instruments commonly used in sex offender risk analysis, the Static-99 and the Psychopathy Checklist-Revised. The results revealed that overall risk scores were higher for the prosecution-retained experts and lower for the defense-retained experts. According to the APS, this arguably shows a “clear pattern of adversarial allegiance.”

A prime example of this is former Maryland chief medical examiner David Fowler. He is most noted for testifying at the trial of Derek Chauvin. Disagreeing with the Hennepin County, Minnesota, medical examiner, Fowler stated that George Floyd died from “drug use and his poor health,” not because of Chauvin’s conduct. This was “eerily similar” to a case in the fall of 2018, when officers encountered Anton Black after a report of a possible kidnapping. The 19-year-old was wrestled to the ground and later died. No officers were charged, and Fowler assisted with the ruling that Black’s death was an accident. Black’s sister, LaToya Holley, said of Floyd’s death, “[i]t just seems to keep happening.” After it became known that Fowler would testify in the Chauvin trial, Holley added, “[W]hy wouldn’t they want him to testify in his defense?”

Fowler is now under intense scrutiny after the Maryland state medical examiner’s office released a list of 1,313 in-custody death determinations that Fowler was involved with and that are now being reviewed. These deaths occurred between 2003 and 2020, and it appears that all were ruled “accidents.” A statement released by the Office of the Attorney General (“OAG”) for the state of Maryland on behalf of Attorney General Frosh in May 2021 read in part: “The decision to conduct the audit followed a request by hundreds of medical professionals to undertake a review of Dr. Fowler’s work in the wake of his testimony in the trial of Derek Chauvin.”

There is also the case of Mary Hong, who worked with the Orange County, California, Crime Lab for over 30 years. Hong was a senior forensic scientist and DNA supervisor, and she now works for the California Department of Justice. Orange County Assistant Public Defender Scott Sanders said Hong was the “crime lab closer” for many felony cases. Sanders is asking for a review of all of her cases as well, after it was revealed that Hong lied and misrepresented evidence in the murder trial of Sanders’ client, Daniel McDermott. Sanders said, “It will only get to the many cases that she impacted through her testimony if there is a meaningful review of past cases.” It is alleged that Hong “got rid of all the red flags” so she could identify McDermott as the suspect.

Forensic scientist Henry Lee has testified in some of the most notorious criminal cases, including those of JonBenét Ramsey and O.J. Simpson. Lee’s first real case was in 1989, when he testified for the prosecution against two homeless teenagers accused of murder in a Connecticut trial. In what Lee called a “burglary gone bad,” he said that a stain on a towel was “identified to be blood.” It was a lie. The stain had never been tested, and later tests revealed that it was, in fact, not blood. The fact that Lee had never actually tested the stain, and then lied about it, led to the exoneration of the two men after they served 30 years in prison for a crime they didn’t commit.

Lee is also accused of hiding evidence in the murder case of L.A. actress Lana Clarkson; of lying about a non-existent blood type in the murder of Janet Myers; and of stating that evidence was human blood in the trial of Joyce Stochmal, where once again it was alleged that Lee “knew or should have known” it was not human blood.

An attorney for the two men wrongly convicted in 1989, Darcy McGraw of the Connecticut Innocence Project, said, “His testimony has led to some very unfair and unjust results.” The former prosecutor in the O.J. Simpson trial, Christopher Darden, commented on Lee’s blood sample testimony: “I didn’t think it was true then — and I don’t think it’s true today. It was bullshit, not science.” Lee’s response to these accusations is to claim he has never been previously accused of wrongdoing in over 8,000 cases. Lee said, “If they are in fact innocent, I’m happy for them … But who is going to speak for the victim?” Perhaps Lee should let the truth speak for both the accused and the victims.

Suggested Improvements

The NRC noted that any attempt to overhaul the forensic science laboratory system and its processes “needs to address and help minimize the community’s current fragmentation and inconsistent practices.” A major problem is that the majority of criminal law enforcement is handled by local and state jurisdictions, entities that often lack the resources needed to build and maintain effective forensic science laboratory systems. The NRC notes that federal systems are often much better funded and staffed, and the extent of services and expertise provided varies widely across jurisdictions. This causes substantial variation across the country in “the depth, reliability, and overall quality of substantive information arising from the forensic examination of evidence available to the legal system.”

The fragmentation problem is exacerbated by the lack of standardization between and within jurisdictions. There is no national accreditation of crime laboratories or uniformity in the certification of experts. Some jurisdictions don’t require certification at all, and many disciplines have no mandatory certification programs. Nor are there standardized protocols governing forensic practices within each discipline; where protocols are in place, they are often vague and lack meaningful enforcement measures. According to the NRC, such issues “obviously pose a continuing and serious threat to the quality and credibility of forensic science practice.”

Another issue is that many programs function within the regulatory province of local and state agencies. So, when new procedures are implemented to assess and improve the strengths, weaknesses, and future needs of forensic disciplines, we must be mindful that the federal government cannot fix all of the identified deficiencies. What Congress can do through its authority, though, is significant. Congress can promote what the NRC calls “best practices,” including the development and implementation of strong accreditation, educational, ethical, certification, and oversight programs in the individual states. This can be done through contingency, or “strings attached,” funding that incentivizes states to adopt standards and practices meant to ensure consistency and accountability.

There is currently no independent federal agency tasked with these goals. The NRC believes that Congress should create a National Institute of Forensic Science (“NIFS”). NIFS would be required to meet several minimum criteria as its operating principles. First, there must be a culture deeply rooted in science, with strong ties to national research and teaching communities. Second, there must be working relationships between local and state forensic entities and professional organizations. Third, NIFS must not be beholden to any existing system but must be fully informed by the failures identified above. Fourth, it must not be part of any law enforcement agency; the NRC is not the only organization that recommends such autonomy as a solution to many of these problems. Finally, it must have funding independence and prominence substantial enough to effect meaningful change, and it must be led by skilled and experienced persons willing to execute national improvement strategies.

It is also important that standardized terms are used by forensic experts. Language that describes findings, conclusions, and degrees of association in the matches should be consistent. Terms such as “match,” “consistent with,” “identical,” “similar in all respects tested,” and “cannot be excluded as the source of” must have exact meanings. The use of these terms typically has a “profound effect on how the trier of fact in a criminal or civil matter perceives and evaluates scientific evidence.”

Lab reports generated from scientific analysis should be complete and thorough. The NRC believes that each report must contain, at a minimum, identified “methods and materials,” “procedures,” “results,” “conclusions,” and “sources and magnitudes of uncertainty in the procedures and conclusions (e.g., levels of confidence).” Such an operational framework would bolster the ability to identify the true perpetrator and reduce the likelihood of wrongful convictions.
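As a sketch of how a laboratory information system could enforce that minimum-content rule, the structure below encodes the NRC’s listed elements as required fields. The class, field, and method names are illustrative assumptions for this article, not part of any actual standard or product.

```python
# Illustrative sketch: the NRC's minimum report contents as required
# fields. All names are hypothetical, not drawn from a real LIMS schema.
from dataclasses import dataclass


@dataclass
class LabReport:
    methods_and_materials: str
    procedures: str
    results: str
    conclusions: str
    # e.g., error rates or levels of confidence attached to conclusions
    sources_and_magnitudes_of_uncertainty: str

    def validate(self) -> None:
        """Reject the report if any NRC-required section is empty."""
        for section, text in vars(self).items():
            if not text.strip():
                raise ValueError(f"Report incomplete: '{section}' is empty.")


# A report missing its uncertainty section fails validation:
# LabReport("GC-MS", "per SOP-12", "positive", "cocaine present", "").validate()
# -> ValueError: Report incomplete: 'sources_and_magnitudes_of_uncertainty' is empty.
```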

Conclusion

The first step is to admit you have a problem: There is a clear problem with the forensic science processes currently in use. Cognitive biases and the resulting errors lead to far too many wrongful convictions. Fortunately, many forensic methods are being declared unreliable and courts are disallowing their use. There must be a concerted effort by the entire forensics community to continue identifying problems and implementing meaningful changes. Researchers must follow the lead of those like Dror, who are committed to solving this crisis. No person should be placed in jeopardy of spending decades in prison for crimes they didn’t commit. It also does a tremendous disservice to the victims when the actual perpetrator is not identified or apprehended. States are already responding by changing laws to allow for relief based on new analyses of old forensic evidence for those wrongfully convicted. Defense attorneys are becoming more adept at recognizing and challenging flawed forensic evidence and methods. We can only hope this trajectory continues. 

Sources: A Review of Contextual Bias in Forensic Science and its Potential Legal Implications, Nikkita Venville (2010); Autopsy of a Crime Lab: Exposing the Flaws in Forensics, Brandon L. Garrett (2021); The Bias Snowball and the Bias Cascade Effects: Two Distinct Biases that May Impact Forensic Decision Making, Itiel E. Dror, PhD, Ruth M. Morgan, D.Phil., Carolyn Rando, PhD, and Sherry Nakhaeizadeh, M.Res. (2017); Biases in forensic experts, Itiel E. Dror (2018); boisestate.edu; californiainnocenceproject.org; Cognitive bias in forensic pathology decisions, Itiel Dror, PhD, Judy Melinek, MD, Jonathan L. Arden, MD, Jeff Kukucka, PhD, Sarah Hawkins, JD, Joye Carter, MD, PhD, and Daniel S. Atherton, MD (2021); The effect of contextual information on decision-making in forensic toxicology, Hilary J. Hamnett and Itiel E. Dror (2020); Experts bodies, experts minds: How physical and mental training shape the brain, Ursula Debarnot, Marco Sperduti, Franck Di Rienzo, and Aymeric Guillot (2014); forensic-pathways.com; Forensic Science and the Administration of Justice: Critical Issues and Directions, Kevin J. Strom and Matthew J. Hickman (2015); Forensic Science Handbook, Volume I, Adam B. Hall and Richard Saferstein (2020); Forensic Science Regulator Guidance: Cognitive Bias Effects Relevant to Forensic Science Examinations, Forensic Science Regulator (2020); georgiainnocenceproject.org; The Guardian; LA Times; Lessons Learnt: Contextual Bias in Forensic Toxicology, Forensic Science Regulator (2019); The National Registry of Exonerations - law.umich.edu; prindleinstitute.org; The Psychology and Sociology of Wrongful Convictions: Forensic Science Reform, Wendy J. Koen and C. Michael Bowers (2018); psychologytoday.com; Report to the President: Forensic Science in Criminal Courts: Ensuring Scientific Validity of Feature-Comparison Methods, Executive Office of the President, President’s Council of Advisors on Science and Technology (2016); science.org; Scientific American; Status and Needs of Forensic Science Service Providers: A Report to Congress, National Institute of Justice (2006); Strengthening Forensic Science in the United States: A Path Forward, National Research Council (2009); Washington Post; WJLA
