Tenth Anniversary of NRC Report Magnifies Limits of Forensic Evidence
by Bill Barton
Steven Mark Chaney was convicted of a 1987 murder based partially on the testimony of forensic scientists, which linked him to the crime by a bite mark found on the victim’s skin. The court heard that Chaney’s teeth were a “perfect match,” and that there was a “one in a million” chance that someone else was the biter.
The jury was therefore convinced of his guilt, but Chaney maintained that he was innocent. His conviction was overturned in October 2015, and subsequently he was released from prison. Then, in December 2018, the Texas Court of Criminal Appeals (“TCCA”) found Chaney “actually innocent” of the crime.
Actually innocent, but imprisoned for 28 years. Why? What changed in those nearly three decades? The somewhat oversimplified answer is that the “science” itself changed.
TCCA found that the “body of scientific knowledge underlying the field of bitemark comparisons has evolved since [Chaney’s] trial in a way that contradicts the scientific evidence relied on by the State at trial” and that “testimony of the sort given at Chaney’s trial is now known to be scientifically unsupportable because it ‘went too far.’”
The February 2009 National Academy of Sciences (“NAS”) report, “Strengthening Forensic Science in the United States: A Path Forward,” is central to arguments against bite-mark evidence, such as that which convicted Chaney.
Sarah Chu, a senior adviser on forensic science policy for the Innocence Project, said the NAS report was a “pivotal moment” and that it was “a consensus report … by an august scientific body.”
“Much forensic evidence — including, for example, bite marks and firearm and toolmark identifications — is introduced in criminal trials without any meaningful scientific validation, determination of error rates, or reliability testing to explain the limits of the discipline,” the report said.
On bite marks in particular, it said, “Although the majority of forensic odontologists are satisfied that bite marks can demonstrate sufficient detail for positive identification, no scientific studies support this assessment, and no large population studies have been conducted.”
The report said, “only nuclear DNA analysis has been rigorously shown to have the capacity to consistently, and with a high degree of certainty, demonstrate a connection between an evidentiary sample and a specific individual or source.”
Regarding hair analysis, the report said, “No scientifically accepted statistics exist about the frequency with which particular characteristics … are distributed in the population,” and that there appeared to be “no uniform standards on the number of features, which … must agree before an examiner may declare a ‘match.’”
How about fingerprints, which have long been considered an “exact means of associating a suspect with a crime scene print”?
On that question, the report said: “The question is less a matter of whether each person’s fingerprints are permanent and unique — uniqueness is commonly assumed — and more a matter of whether one can determine with adequate reliability that the finger that left an imperfect impression at a crime scene is the same finger that left an impression (with different imperfections) in a file of fingerprints.”
The NAS report brings together in one place a number of criticisms that have arisen over the years in a variety of places. As Science editor-in-chief Donald Kennedy wrote in his 2003 article titled, “Forensic Science: Oxymoron?”: “It’s not that fingerprint analysis is unreliable. The problem, rather, is that its reliability is unverified either by statistical models of fingerprint variation or by consistent data on error rates. Nor does the problem with forensic methods end there. The use of hair samples in identification and the analysis of bullet markings exemplify kinds of ‘scientific’ evidence whose reliability may be exaggerated when presented to a jury.”
Alicia L. Carriquiry, distinguished professor of statistics at Iowa State University, became involved with forensic science in 1998, when she and Hal Stern were asked by the FBI to study bullet lead analysis. Examiners used the method to compare the chemical composition of lead in bullets recovered from a crime scene with that of bullets found in a suspect’s possession. If the two compositions were indistinguishable, according to thresholds determined by the FBI, the examiner would say that the two bullets came from the same box of ammunition.
“Hal and I looked at this data and we said, ‘Wait a minute, you are forgetting something called the probability of a coincidental match,’” said Carriquiry.
“‘It’s not just that the chemical composition is indistinguishable, you need to think about how many other bullets would have an indistinguishable chemical composition, but come from different boxes [of ammunition].’” The FBI later discontinued use of this type of evidence.
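Carriquiry’s objection can be made concrete with a toy calculation. The numbers below are invented for illustration, not drawn from the FBI study: the point is simply that if even a small fraction of unrelated bullets would match within the measurement threshold, a large pool of such bullets makes a chance match nearly certain.

```python
def coincidental_match_prob(p_single: float, n_other: int) -> float:
    """Probability that at least one of n_other unrelated bullets
    also matches, if each matches independently with prob p_single."""
    return 1 - (1 - p_single) ** n_other

# Suppose (hypothetically) 1 in 1,000 unrelated bullets would match
# by chance, and 100,000 bullets from other boxes are in circulation.
p = coincidental_match_prob(1e-3, 100_000)
print(f"{p:.6f}")  # near 1: a coincidental match is all but guaranteed
```

An indistinguishable composition, in other words, says little on its own; its evidential value depends on how many other bullets would have passed the same test.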
An April 2019 Significance article about forensic science said, “… there is a key message … of the 2009 NAS report. The message is this: Think carefully about the value of any piece of evidence, and express that value in a way that does not mislead those responsible for establishing the facts of a case.”
The line between science and pseudoscience is a thin one and more than likely not a straight line.
Source: Significance (significancemagazine.com)