Weak Forensic Science Has High Cost
~Steve Pierson, ASA Director of Science Policy, email@example.com
Clifford Spiegelman is a distinguished professor of statistics at Texas A&M, where he has been on the faculty for 23 years. He is also a senior research scientist at the Texas Transportation Institute. His applied research interests include chemometrics, transportation statistics, environmetrics, and statistical forensics.
Forensic science is perceived by many as the “magic bullet” that links evil deeds to specific people. This is the story line of movies, television shows, and books. Few forensic examiners or technicians in these shows make errors, and when errors do occur, they are caused by ‘rogue’ forensic examiners or over-zealous technicians who do not follow accepted procedures.
In the real world, forensic science is used to determine occurrences and reconstruct crimes. It is used to identify suspects and possible crime scenes and to eliminate others. It also is used in criminal trials and appeals. Key to forensic science’s use in the real world is the confidence that law enforcement, the judicial system, and society at large place in it.
As currently constructed, however, the practice of forensic science should largely get a no-confidence vote, with the possible exception of DNA evidence (even though the scientific community has yet to be allowed access to the DNA database). Indeed, Michael Saks and Jonathan Koehler, in a 2005 Science review article, name forensic science testing errors as a contributing factor in 63% of the wrongful convictions in 86 DNA cases studied by the Innocence Project. False or misleading statements by forensic examiners contributed to 27% of the false convictions studied.
The painful truth is that nearly all forensic procedures have been developed without much involvement from the statistical community or enough involvement from the independent, university-based scientific community or federal research labs.
As a result, forensic results are typically accompanied by uncertainty statements that cannot be supported. For example, it is typical in firearm toolmark identifications to state that, to a practical certainty, the defendant’s gun fired the bullets found in a decedent. Two recent National Research Council (NRC) reports (Strengthening Forensic Science in the United States: A Path Forward and Ballistic Imaging) conclude there is no statistical foundation for such an absolute statement. Also, some federal and state jurisdictions recently ruled that firearm toolmark examiners may only testify that it is more likely than not that the defendant’s gun fired the bullets found in a decedent. (See State of Ohio v. Anderson and U.S. v. Glynn.) That is, the courts require only a better than 50-50 chance of a match.
The broader scientific community has noted the blatant failures of forensic science, but the justice system has not paid careful enough attention. In a December 3, 2003, editorial, Science editor-in-chief Donald Kennedy wrote, “It’s not that fingerprint analysis is unreliable. The problem, rather, is that its reliability is unverified either by statistical models of fingerprint variation or by consistent data on error rates. Nor does the problem with forensic methods end there. The use of hair samples in identification and the analysis of bullet markings exemplify kinds of ‘scientific’ evidence whose reliability may be exaggerated when presented to a jury.” Other points of view, mostly from within the forensic science community, are more sympathetic to the current state of forensic science.
Several colleagues and I recently showed that the House Select Committee on Assassinations’ (HSCA) compositional bullet lead analysis (CBLA) study in the JFK investigation was seriously flawed. The two other assassinations studied by the HSCA were those of Martin Luther King Jr. and Sen. Robert F. Kennedy. I have looked at the forensic aspects of both investigations. In the Martin Luther King Jr. investigation, forensic science likely could have provided more help, and the issue can still be addressed. The practice of firearm toolmark examiners then (and, for the most part, now) was to attribute a weapon to a crime scene to a practical certainty. The other possible findings an examiner can reach are that the weapon was certainly not involved in the crime (a finding rarely made if the brand of weapon is a possibility) or that the results are inconclusive.